Mar 13 01:10:13.937300 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 13 01:10:14.784648 master-0 kubenswrapper[3985]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:10:14.784648 master-0 kubenswrapper[3985]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 13 01:10:14.784648 master-0 kubenswrapper[3985]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:10:14.786223 master-0 kubenswrapper[3985]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:10:14.786223 master-0 kubenswrapper[3985]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 13 01:10:14.786223 master-0 kubenswrapper[3985]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:10:14.786223 master-0 kubenswrapper[3985]: I0313 01:10:14.785681 3985 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 01:10:14.793614 master-0 kubenswrapper[3985]: W0313 01:10:14.793559 3985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 01:10:14.793614 master-0 kubenswrapper[3985]: W0313 01:10:14.793594 3985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:10:14.793614 master-0 kubenswrapper[3985]: W0313 01:10:14.793603 3985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:10:14.793614 master-0 kubenswrapper[3985]: W0313 01:10:14.793613 3985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:10:14.793614 master-0 kubenswrapper[3985]: W0313 01:10:14.793623 3985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793633 3985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793644 3985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793654 3985 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793662 3985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793674 3985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793684 3985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793693 3985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793735 3985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793744 3985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793752 3985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793760 3985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793767 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793776 3985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793784 3985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793793 3985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793800 3985 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793808 3985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793816 3985 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793824 3985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:10:14.793915 master-0 kubenswrapper[3985]: W0313 01:10:14.793832 3985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793840 3985 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793848 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793858 3985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793868 3985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793878 3985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793886 3985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793894 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793902 3985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793910 3985 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793919 3985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793929 3985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793939 3985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793948 3985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793959 3985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793968 3985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793981 3985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793989 3985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.793997 3985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.794006 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 01:10:14.794910 master-0 kubenswrapper[3985]: W0313 01:10:14.794014 3985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794022 3985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794030 3985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794037 3985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794057 3985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794066 3985 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794074 3985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794082 3985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794090 3985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794098 3985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794105 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794113 3985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794121 3985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794128 3985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794137 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794145 3985 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794153 3985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794160 3985 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794168 3985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794175 3985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 01:10:14.795825 master-0 kubenswrapper[3985]: W0313 01:10:14.794183 3985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: W0313 01:10:14.794190 3985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: W0313 01:10:14.794198 3985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: W0313 01:10:14.794205 3985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: W0313 01:10:14.794212 3985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: W0313 01:10:14.794223 3985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: W0313 01:10:14.794233 3985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: W0313 01:10:14.794243 3985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795475 3985 flags.go:64] FLAG: --address="0.0.0.0"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795499 3985 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795556 3985 flags.go:64] FLAG: --anonymous-auth="true"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795569 3985 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795580 3985 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795591 3985 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795603 3985 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795614 3985 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795624 3985 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795633 3985 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795642 3985 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795652 3985 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795690 3985 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795700 3985 flags.go:64] FLAG: --cgroup-root=""
Mar 13 01:10:14.796978 master-0 kubenswrapper[3985]: I0313 01:10:14.795708 3985 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795718 3985 flags.go:64] FLAG: --client-ca-file=""
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795727 3985 flags.go:64] FLAG: --cloud-config=""
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795736 3985 flags.go:64] FLAG: --cloud-provider=""
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795746 3985 flags.go:64] FLAG: --cluster-dns="[]"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795762 3985 flags.go:64] FLAG: --cluster-domain=""
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795770 3985 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795780 3985 flags.go:64] FLAG: --config-dir=""
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795788 3985 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795799 3985 flags.go:64] FLAG: --container-log-max-files="5"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795810 3985 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795819 3985 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795829 3985 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795839 3985 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795849 3985 flags.go:64] FLAG: --contention-profiling="false"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795860 3985 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795869 3985 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795879 3985 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795888 3985 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795899 3985 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795909 3985 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795918 3985 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795926 3985 flags.go:64] FLAG: --enable-load-reader="false"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795936 3985 flags.go:64] FLAG: --enable-server="true"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795945 3985 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 13 01:10:14.798038 master-0 kubenswrapper[3985]: I0313 01:10:14.795962 3985 flags.go:64] FLAG: --event-burst="100"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.795972 3985 flags.go:64] FLAG: --event-qps="50"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.795981 3985 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.795990 3985 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.795999 3985 flags.go:64] FLAG: --eviction-hard=""
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796010 3985 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796019 3985 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796028 3985 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796037 3985 flags.go:64] FLAG: --eviction-soft=""
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796059 3985 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796068 3985 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796084 3985 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796093 3985 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796102 3985 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796111 3985 flags.go:64] FLAG: --fail-swap-on="true"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796120 3985 flags.go:64] FLAG: --feature-gates=""
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796131 3985 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796141 3985 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796150 3985 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796159 3985 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796169 3985 flags.go:64] FLAG: --healthz-port="10248"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796180 3985 flags.go:64] FLAG: --help="false"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796189 3985 flags.go:64] FLAG: --hostname-override=""
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796197 3985 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796212 3985 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 13 01:10:14.799560 master-0 kubenswrapper[3985]: I0313 01:10:14.796222 3985 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796230 3985 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796240 3985 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796249 3985 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796257 3985 flags.go:64] FLAG: --image-service-endpoint=""
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796266 3985 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796275 3985 flags.go:64] FLAG: --kube-api-burst="100"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796284 3985 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796293 3985 flags.go:64] FLAG: --kube-api-qps="50"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796302 3985 flags.go:64] FLAG: --kube-reserved=""
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796311 3985 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796320 3985 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796329 3985 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796339 3985 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796348 3985 flags.go:64] FLAG: --lock-file=""
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796356 3985 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796365 3985 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796375 3985 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796389 3985 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796399 3985 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796419 3985 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796428 3985 flags.go:64] FLAG: --logging-format="text"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796437 3985 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796447 3985 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796457 3985 flags.go:64] FLAG: --manifest-url=""
Mar 13 01:10:14.800314 master-0 kubenswrapper[3985]: I0313 01:10:14.796465 3985 flags.go:64] FLAG: --manifest-url-header=""
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796477 3985 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796486 3985 flags.go:64] FLAG: --max-open-files="1000000"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796498 3985 flags.go:64] FLAG: --max-pods="110"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796507 3985 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796542 3985 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796551 3985 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796560 3985 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796569 3985 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796578 3985 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796588 3985 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796608 3985 flags.go:64] FLAG: --node-status-max-images="50"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796617 3985 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796627 3985 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796636 3985 flags.go:64] FLAG: --pod-cidr=""
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796645 3985 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796659 3985 flags.go:64] FLAG: --pod-manifest-path=""
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796668 3985 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796678 3985 flags.go:64] FLAG: --pods-per-core="0"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796687 3985 flags.go:64] FLAG: --port="10250"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796697 3985 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796706 3985 flags.go:64] FLAG: --provider-id=""
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796714 3985 flags.go:64] FLAG: --qos-reserved=""
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796724 3985 flags.go:64] FLAG: --read-only-port="10255"
Mar 13 01:10:14.800842 master-0 kubenswrapper[3985]: I0313 01:10:14.796736 3985 flags.go:64] FLAG: --register-node="true"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796745 3985 flags.go:64] FLAG: --register-schedulable="true"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796755 3985 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796770 3985 flags.go:64] FLAG: --registry-burst="10"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796779 3985 flags.go:64] FLAG: --registry-qps="5"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796788 3985 flags.go:64] FLAG: --reserved-cpus=""
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796796 3985 flags.go:64] FLAG: --reserved-memory=""
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796819 3985 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796829 3985 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796838 3985 flags.go:64] FLAG: --rotate-certificates="false"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796847 3985 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796856 3985 flags.go:64] FLAG: --runonce="false"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796865 3985 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796875 3985 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796884 3985 flags.go:64] FLAG: --seccomp-default="false"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796894 3985 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796903 3985 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796912 3985 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796922 3985 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796932 3985 flags.go:64] FLAG: --storage-driver-password="root"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796942 3985 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796952 3985 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796962 3985 flags.go:64] FLAG: --storage-driver-user="root"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796970 3985 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796980 3985 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 13 01:10:14.801331 master-0 kubenswrapper[3985]: I0313 01:10:14.796990 3985 flags.go:64] FLAG: --system-cgroups=""
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797000 3985 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797014 3985 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797024 3985 flags.go:64] FLAG: --tls-cert-file=""
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797033 3985 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797050 3985 flags.go:64] FLAG: --tls-min-version=""
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797060 3985 flags.go:64] FLAG: --tls-private-key-file=""
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797070 3985 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797079 3985 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797088 3985 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797098 3985 flags.go:64] FLAG: --v="2"
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797110 3985 flags.go:64] FLAG: --version="false"
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797122 3985 flags.go:64] FLAG: --vmodule=""
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797133 3985 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: I0313 01:10:14.797142 3985 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: W0313 01:10:14.797421 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: W0313 01:10:14.797433 3985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: W0313 01:10:14.797441 3985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: W0313 01:10:14.797463 3985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: W0313 01:10:14.797474 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: W0313 01:10:14.797482 3985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: W0313 01:10:14.797491 3985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 01:10:14.801867 master-0 kubenswrapper[3985]: W0313 01:10:14.797500 3985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797534 3985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797545 3985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797555 3985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797564 3985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797572 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797580 3985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797588 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797596 3985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797604 3985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797612 3985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797619 3985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797627 3985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797634 3985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797642 3985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797650 3985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797657 3985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797666 3985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797674 3985 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:10:14.802328 master-0 kubenswrapper[3985]: W0313 01:10:14.797683 3985 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797691 3985 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797699 3985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797707 3985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797714 3985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797722 3985 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797730 3985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797738 3985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797745 3985 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797753 3985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797761 3985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797769 3985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797777 3985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797797 3985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797804 3985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797812 3985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797819 3985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797827 3985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797836 3985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797843 3985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:10:14.802767 master-0 kubenswrapper[3985]: W0313 01:10:14.797851 3985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797858 3985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797866 3985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797874 3985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797882 3985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797889 3985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797897 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797905 3985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797912 3985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797920 3985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797927 3985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797935 3985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797948 3985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797955 3985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797963 3985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797973 3985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797983 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.797992 3985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.798001 3985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.798010 3985 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:10:14.803181 master-0 kubenswrapper[3985]: W0313 01:10:14.798018 3985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 01:10:14.803663 master-0 kubenswrapper[3985]: W0313 01:10:14.798026 3985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 01:10:14.803663 master-0 kubenswrapper[3985]: W0313 01:10:14.798035 3985 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:10:14.803663 master-0 kubenswrapper[3985]: W0313 01:10:14.798044 3985 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:10:14.803663 master-0 kubenswrapper[3985]: W0313 01:10:14.798054 3985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:10:14.803663 master-0 kubenswrapper[3985]: W0313 01:10:14.798065 3985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 01:10:14.803663 master-0 kubenswrapper[3985]: I0313 01:10:14.798960 3985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 01:10:14.816806 master-0 kubenswrapper[3985]: I0313 01:10:14.816712 3985 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 13 01:10:14.816806 master-0 kubenswrapper[3985]: I0313 01:10:14.816784 3985 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.816941 3985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.816960 3985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.816973 3985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.816983 3985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.816993 3985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.817002 3985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.817010 3985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.817019 3985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.817027 3985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.817035 3985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.817044 3985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.817052 3985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.817059 3985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.817069 3985 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:10:14.817059 master-0 kubenswrapper[3985]: W0313 01:10:14.817079 3985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817088 3985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817097 3985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817105 3985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817113 3985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817122 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817130 3985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817138 3985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817151 3985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817162 3985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817205 3985 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817214 3985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817222 3985 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817230 3985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817238 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817247 3985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817254 3985 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817262 3985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817270 3985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817279 3985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:10:14.817396 master-0 kubenswrapper[3985]: W0313 01:10:14.817287 3985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817295 3985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817303 3985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817312 3985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817322 3985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817335 3985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817344 3985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817353 3985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817360 3985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817368 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817377 3985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817385 3985 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817393 3985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817402 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817413 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817423 3985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817433 3985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817442 3985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817450 3985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817459 3985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:10:14.817882 master-0 kubenswrapper[3985]: W0313 01:10:14.817467 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817476 3985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817484 3985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817492 3985 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817500 3985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817535 3985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817545 3985 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817553 3985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817561 3985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817569 3985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817577 3985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817586 3985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817594 3985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817605 3985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817617 3985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817626 3985 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817635 3985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 01:10:14.818558 master-0 kubenswrapper[3985]: W0313 01:10:14.817647 3985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: I0313 01:10:14.817661 3985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.817955 3985 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.817974 3985 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.817984 3985 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.817995 3985 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.818007 3985 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.818016 3985 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.818025 3985 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.818033 3985 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.818042 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.818051 3985 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.818059 3985 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.818067 3985 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.818076 3985 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:10:14.818939 master-0 kubenswrapper[3985]: W0313 01:10:14.818087 3985 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818095 3985 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818103 3985 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818114 3985 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818124 3985 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818133 3985 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818142 3985 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818150 3985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818158 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818166 3985 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818175 3985 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818183 3985 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818191 3985 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818200 3985 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818211 3985 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818220 3985 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818228 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818236 3985 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818244 3985 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 01:10:14.819271 master-0 kubenswrapper[3985]: W0313 01:10:14.818252 3985 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818260 3985 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818267 3985 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818275 3985 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818283 3985 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818291 3985 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818299 3985 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818310 3985 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818321 3985 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818331 3985 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818339 3985 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818348 3985 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818357 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818366 3985 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818375 3985 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818383 3985 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818391 3985 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818400 3985 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818407 3985 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818416 3985 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:10:14.819761 master-0 kubenswrapper[3985]: W0313 01:10:14.818424 3985 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818432 3985 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818440 3985 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818447 3985 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818455 3985 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818463 3985 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818471 3985 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818479 3985 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818487 3985 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818495 3985 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818540 3985 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818548 3985 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818556 3985 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818565 3985 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818575 3985 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818584 3985 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818593 3985 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818601 3985 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818610 3985 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:10:14.820173 master-0 kubenswrapper[3985]: W0313 01:10:14.818618 3985 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:10:14.820585 master-0 kubenswrapper[3985]: I0313 01:10:14.818632 3985 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 01:10:14.820585 master-0 kubenswrapper[3985]: I0313 01:10:14.820073 3985 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 01:10:14.824646 master-0 kubenswrapper[3985]: I0313 01:10:14.824599 3985 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir"
Mar 13 01:10:14.826174 master-0 kubenswrapper[3985]: I0313 01:10:14.826134 3985 server.go:997] "Starting client certificate rotation"
Mar 13 01:10:14.826212 master-0 kubenswrapper[3985]: I0313 01:10:14.826185 3985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 13 01:10:14.826533 master-0 kubenswrapper[3985]: I0313 01:10:14.826462 3985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Mar 13 01:10:14.857483 master-0 kubenswrapper[3985]: I0313 01:10:14.857383 3985 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 01:10:14.863600 master-0 kubenswrapper[3985]: I0313 01:10:14.863549 3985 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 01:10:14.865172 master-0 kubenswrapper[3985]: E0313 01:10:14.865082 3985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError"
Mar 13 01:10:14.890272 master-0 kubenswrapper[3985]: I0313 01:10:14.890198 3985 log.go:25] "Validated CRI v1 runtime API"
Mar 13 01:10:14.896781 master-0 kubenswrapper[3985]: I0313 01:10:14.896721 3985 log.go:25] "Validated CRI v1 image API"
Mar 13 01:10:14.900878 master-0 kubenswrapper[3985]: I0313 01:10:14.900811 3985 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 01:10:14.905746 master-0 kubenswrapper[3985]: I0313
01:10:14.905675 3985 fs.go:135] Filesystem UUIDs: map[157256f6-add8-4ac1-82d5-8fc6c96a0913:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Mar 13 01:10:14.905746 master-0 kubenswrapper[3985]: I0313 01:10:14.905727 3985 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0}] Mar 13 01:10:14.939123 master-0 kubenswrapper[3985]: I0313 01:10:14.938560 3985 manager.go:217] Machine: {Timestamp:2026-03-13 01:10:14.935318479 +0000 UTC m=+0.811998773 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:3a0a52883c534d178c5b12dafb817e60 SystemUUID:3a0a5288-3c53-4d17-8c5b-12dafb817e60 BootID:b5890e11-c274-4f10-a685-d6fee1e9f87f Filesystems:[{Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 
252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:f6:d3:bd Speed:-1 Mtu:9000} {Name:ovs-system MacAddress:d6:2f:ab:d3:f0:10 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 
Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 01:10:14.939123 master-0 kubenswrapper[3985]: I0313 01:10:14.939021 3985 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 13 01:10:14.939543 master-0 kubenswrapper[3985]: I0313 01:10:14.939285 3985 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 01:10:14.941241 master-0 kubenswrapper[3985]: I0313 01:10:14.941186 3985 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 01:10:14.941662 master-0 kubenswrapper[3985]: I0313 01:10:14.941588 3985 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 01:10:14.942047 master-0 kubenswrapper[3985]: I0313 01:10:14.941653 3985 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentag
e":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 01:10:14.942153 master-0 kubenswrapper[3985]: I0313 01:10:14.942075 3985 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 01:10:14.942153 master-0 kubenswrapper[3985]: I0313 01:10:14.942097 3985 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 01:10:14.942153 master-0 kubenswrapper[3985]: I0313 01:10:14.942127 3985 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 01:10:14.942346 master-0 kubenswrapper[3985]: I0313 01:10:14.942171 3985 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 01:10:14.943174 master-0 kubenswrapper[3985]: I0313 01:10:14.943125 3985 state_mem.go:36] "Initialized new in-memory state store" Mar 13 01:10:14.943317 master-0 kubenswrapper[3985]: I0313 01:10:14.943283 3985 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 13 01:10:14.947464 master-0 kubenswrapper[3985]: I0313 01:10:14.947417 3985 kubelet.go:418] "Attempting to sync node with API server" Mar 13 01:10:14.947464 master-0 kubenswrapper[3985]: I0313 01:10:14.947452 3985 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 01:10:14.947653 master-0 kubenswrapper[3985]: I0313 01:10:14.947550 3985 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 13 01:10:14.947653 master-0 kubenswrapper[3985]: I0313 01:10:14.947576 3985 kubelet.go:324] "Adding apiserver pod source" Mar 13 01:10:14.947653 master-0 
kubenswrapper[3985]: I0313 01:10:14.947599 3985 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 01:10:14.953725 master-0 kubenswrapper[3985]: I0313 01:10:14.953600 3985 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 13 01:10:14.954665 master-0 kubenswrapper[3985]: W0313 01:10:14.954567 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:14.954778 master-0 kubenswrapper[3985]: E0313 01:10:14.954685 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:14.954778 master-0 kubenswrapper[3985]: W0313 01:10:14.954686 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:14.954887 master-0 kubenswrapper[3985]: E0313 01:10:14.954830 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:14.957109 master-0 kubenswrapper[3985]: I0313 01:10:14.957060 3985 
kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 13 01:10:14.957503 master-0 kubenswrapper[3985]: I0313 01:10:14.957452 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 13 01:10:14.957503 master-0 kubenswrapper[3985]: I0313 01:10:14.957497 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 13 01:10:14.957695 master-0 kubenswrapper[3985]: I0313 01:10:14.957541 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 13 01:10:14.957695 master-0 kubenswrapper[3985]: I0313 01:10:14.957577 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 13 01:10:14.957695 master-0 kubenswrapper[3985]: I0313 01:10:14.957593 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 13 01:10:14.957695 master-0 kubenswrapper[3985]: I0313 01:10:14.957608 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 13 01:10:14.957695 master-0 kubenswrapper[3985]: I0313 01:10:14.957623 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 13 01:10:14.957695 master-0 kubenswrapper[3985]: I0313 01:10:14.957638 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 13 01:10:14.957695 master-0 kubenswrapper[3985]: I0313 01:10:14.957654 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 13 01:10:14.957695 master-0 kubenswrapper[3985]: I0313 01:10:14.957667 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 13 01:10:14.957695 master-0 kubenswrapper[3985]: I0313 01:10:14.957712 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 13 01:10:14.958269 master-0 kubenswrapper[3985]: I0313 01:10:14.957737 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 13 01:10:14.958810 master-0 
kubenswrapper[3985]: I0313 01:10:14.958757 3985 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 13 01:10:14.959644 master-0 kubenswrapper[3985]: I0313 01:10:14.959598 3985 server.go:1280] "Started kubelet" Mar 13 01:10:14.959947 master-0 kubenswrapper[3985]: I0313 01:10:14.959881 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:14.961723 master-0 kubenswrapper[3985]: I0313 01:10:14.961321 3985 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 01:10:14.961723 master-0 kubenswrapper[3985]: I0313 01:10:14.961362 3985 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 01:10:14.961723 master-0 kubenswrapper[3985]: I0313 01:10:14.961611 3985 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 13 01:10:14.962227 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 13 01:10:14.963205 master-0 kubenswrapper[3985]: I0313 01:10:14.962255 3985 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 01:10:14.964366 master-0 kubenswrapper[3985]: I0313 01:10:14.964320 3985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 13 01:10:14.964500 master-0 kubenswrapper[3985]: I0313 01:10:14.964377 3985 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 01:10:14.965724 master-0 kubenswrapper[3985]: I0313 01:10:14.964746 3985 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 13 01:10:14.965724 master-0 kubenswrapper[3985]: E0313 01:10:14.964732 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:14.965724 master-0 kubenswrapper[3985]: I0313 01:10:14.964775 3985 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 13 01:10:14.965724 master-0 kubenswrapper[3985]: I0313 01:10:14.964796 3985 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 13 01:10:14.968447 master-0 kubenswrapper[3985]: I0313 01:10:14.966105 3985 reconstruct.go:97] "Volume reconstruction finished" Mar 13 01:10:14.968447 master-0 kubenswrapper[3985]: I0313 01:10:14.966133 3985 reconciler.go:26] "Reconciler: start to sync state" Mar 13 01:10:14.968447 master-0 kubenswrapper[3985]: W0313 01:10:14.966667 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:14.968447 master-0 kubenswrapper[3985]: E0313 01:10:14.966818 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:14.968447 master-0 kubenswrapper[3985]: E0313 01:10:14.967965 3985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms" Mar 13 01:10:14.968447 master-0 kubenswrapper[3985]: I0313 01:10:14.968230 3985 factory.go:55] Registering systemd factory Mar 13 01:10:14.968447 master-0 kubenswrapper[3985]: I0313 01:10:14.968283 3985 factory.go:221] Registration of the systemd container factory successfully Mar 13 01:10:14.975781 master-0 kubenswrapper[3985]: I0313 01:10:14.974279 3985 factory.go:153] Registering CRI-O factory Mar 13 01:10:14.975781 master-0 kubenswrapper[3985]: I0313 01:10:14.974332 3985 factory.go:221] Registration of the crio container factory successfully Mar 13 01:10:14.975781 master-0 kubenswrapper[3985]: I0313 01:10:14.974459 3985 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 13 01:10:14.975781 master-0 kubenswrapper[3985]: I0313 01:10:14.974590 3985 factory.go:103] Registering Raw factory Mar 13 01:10:14.975781 master-0 kubenswrapper[3985]: I0313 01:10:14.974652 3985 manager.go:1196] Started watching for new ooms in manager Mar 13 01:10:14.976847 master-0 kubenswrapper[3985]: I0313 01:10:14.976747 3985 server.go:449] "Adding debug handlers to kubelet server" Mar 13 01:10:14.979391 master-0 kubenswrapper[3985]: I0313 01:10:14.978442 3985 manager.go:319] Starting recovery of all containers Mar 13 01:10:14.980187 master-0 kubenswrapper[3985]: E0313 
01:10:14.978398 3985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c415b9a0d93e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:14.959543271 +0000 UTC m=+0.836223525,LastTimestamp:2026-03-13 01:10:14.959543271 +0000 UTC m=+0.836223525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:14.982733 master-0 kubenswrapper[3985]: E0313 01:10:14.982698 3985 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 13 01:10:15.012031 master-0 kubenswrapper[3985]: I0313 01:10:15.011716 3985 manager.go:324] Recovery completed Mar 13 01:10:15.023315 master-0 kubenswrapper[3985]: I0313 01:10:15.023281 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.025589 master-0 kubenswrapper[3985]: I0313 01:10:15.025546 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.025677 master-0 kubenswrapper[3985]: I0313 01:10:15.025622 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.025677 master-0 kubenswrapper[3985]: I0313 01:10:15.025640 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.026999 master-0 
kubenswrapper[3985]: I0313 01:10:15.026973 3985 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 01:10:15.027092 master-0 kubenswrapper[3985]: I0313 01:10:15.027075 3985 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 01:10:15.027173 master-0 kubenswrapper[3985]: I0313 01:10:15.027161 3985 state_mem.go:36] "Initialized new in-memory state store" Mar 13 01:10:15.065049 master-0 kubenswrapper[3985]: E0313 01:10:15.064904 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:15.080230 master-0 kubenswrapper[3985]: I0313 01:10:15.080208 3985 policy_none.go:49] "None policy: Start" Mar 13 01:10:15.081840 master-0 kubenswrapper[3985]: I0313 01:10:15.081808 3985 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 01:10:15.081928 master-0 kubenswrapper[3985]: I0313 01:10:15.081851 3985 state_mem.go:35] "Initializing new in-memory state store" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.164494 3985 manager.go:334] "Starting Device Plugin manager" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.164800 3985 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.164862 3985 server.go:79] "Starting device plugin registration server" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: E0313 01:10:15.165062 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.165671 3985 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.165695 3985 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 01:10:15.179472 master-0 
kubenswrapper[3985]: I0313 01:10:15.165894 3985 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.166138 3985 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.166166 3985 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: E0313 01:10:15.169161 3985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: E0313 01:10:15.171128 3985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.171993 3985 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.175837 3985 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.175964 3985 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: I0313 01:10:15.176110 3985 kubelet.go:2335] "Starting kubelet main sync loop" Mar 13 01:10:15.179472 master-0 kubenswrapper[3985]: E0313 01:10:15.176224 3985 kubelet.go:2359] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 13 01:10:15.180350 master-0 kubenswrapper[3985]: W0313 01:10:15.180038 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:15.180350 master-0 kubenswrapper[3985]: E0313 01:10:15.180137 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:15.266470 master-0 kubenswrapper[3985]: I0313 01:10:15.266364 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.268260 master-0 kubenswrapper[3985]: I0313 01:10:15.268206 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.268349 master-0 kubenswrapper[3985]: I0313 01:10:15.268283 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.268349 master-0 kubenswrapper[3985]: I0313 01:10:15.268302 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 
01:10:15.268439 master-0 kubenswrapper[3985]: I0313 01:10:15.268354 3985 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 01:10:15.270002 master-0 kubenswrapper[3985]: E0313 01:10:15.269937 3985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 01:10:15.277098 master-0 kubenswrapper[3985]: I0313 01:10:15.277041 3985 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 01:10:15.277257 master-0 kubenswrapper[3985]: I0313 01:10:15.277172 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.278612 master-0 kubenswrapper[3985]: I0313 01:10:15.278578 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.278691 master-0 kubenswrapper[3985]: I0313 01:10:15.278630 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.278691 master-0 kubenswrapper[3985]: I0313 01:10:15.278648 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.278890 master-0 kubenswrapper[3985]: I0313 01:10:15.278849 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.279439 master-0 kubenswrapper[3985]: I0313 01:10:15.279393 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:10:15.279495 master-0 kubenswrapper[3985]: I0313 01:10:15.279478 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.280386 master-0 kubenswrapper[3985]: I0313 01:10:15.280359 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.280450 master-0 kubenswrapper[3985]: I0313 01:10:15.280402 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.280494 master-0 kubenswrapper[3985]: I0313 01:10:15.280451 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.280494 master-0 kubenswrapper[3985]: I0313 01:10:15.280469 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.280629 master-0 kubenswrapper[3985]: I0313 01:10:15.280538 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.280629 master-0 kubenswrapper[3985]: I0313 01:10:15.280558 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.280749 master-0 kubenswrapper[3985]: I0313 01:10:15.280713 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:10:15.280800 master-0 kubenswrapper[3985]: I0313 01:10:15.280756 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.280922 master-0 kubenswrapper[3985]: I0313 01:10:15.280884 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.282084 master-0 kubenswrapper[3985]: I0313 01:10:15.282037 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.282084 master-0 kubenswrapper[3985]: I0313 01:10:15.282058 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.282084 master-0 kubenswrapper[3985]: I0313 01:10:15.282080 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.282235 master-0 kubenswrapper[3985]: I0313 01:10:15.282090 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.282235 master-0 kubenswrapper[3985]: I0313 01:10:15.282105 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.282235 master-0 kubenswrapper[3985]: I0313 01:10:15.282113 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.282364 master-0 kubenswrapper[3985]: I0313 01:10:15.282268 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.282709 master-0 kubenswrapper[3985]: I0313 01:10:15.282653 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.282785 master-0 kubenswrapper[3985]: I0313 01:10:15.282758 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.283314 master-0 kubenswrapper[3985]: I0313 01:10:15.283280 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.283389 master-0 kubenswrapper[3985]: I0313 01:10:15.283336 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.283389 master-0 kubenswrapper[3985]: I0313 01:10:15.283357 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.283559 master-0 kubenswrapper[3985]: I0313 01:10:15.283498 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.283820 master-0 kubenswrapper[3985]: I0313 01:10:15.283781 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.283887 master-0 kubenswrapper[3985]: I0313 01:10:15.283860 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.285255 master-0 kubenswrapper[3985]: I0313 01:10:15.285225 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.285335 master-0 kubenswrapper[3985]: I0313 01:10:15.285274 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.285335 master-0 kubenswrapper[3985]: I0313 01:10:15.285295 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.285444 master-0 kubenswrapper[3985]: I0313 01:10:15.285391 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.285504 master-0 kubenswrapper[3985]: I0313 01:10:15.285463 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.285504 master-0 kubenswrapper[3985]: I0313 01:10:15.285484 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.285613 master-0 kubenswrapper[3985]: I0313 01:10:15.285557 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.285654 master-0 kubenswrapper[3985]: I0313 01:10:15.285604 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:10:15.285654 master-0 kubenswrapper[3985]: I0313 01:10:15.285625 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.285732 master-0 kubenswrapper[3985]: I0313 01:10:15.285655 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.285770 master-0 kubenswrapper[3985]: I0313 01:10:15.285746 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.286775 master-0 kubenswrapper[3985]: I0313 01:10:15.286735 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.286835 master-0 kubenswrapper[3985]: I0313 01:10:15.286784 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.286835 master-0 kubenswrapper[3985]: I0313 01:10:15.286801 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.368684 master-0 kubenswrapper[3985]: I0313 01:10:15.368614 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.368684 master-0 kubenswrapper[3985]: I0313 01:10:15.368692 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:10:15.369095 master-0 kubenswrapper[3985]: I0313 01:10:15.368739 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:10:15.369095 master-0 kubenswrapper[3985]: I0313 01:10:15.368779 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.369095 master-0 kubenswrapper[3985]: I0313 01:10:15.368820 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.369095 master-0 kubenswrapper[3985]: I0313 01:10:15.368861 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.369095 master-0 kubenswrapper[3985]: I0313 01:10:15.368898 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:10:15.369095 master-0 kubenswrapper[3985]: I0313 01:10:15.368949 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.369095 master-0 kubenswrapper[3985]: I0313 01:10:15.369000 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.369095 master-0 kubenswrapper[3985]: I0313 01:10:15.369078 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.369406 master-0 kubenswrapper[3985]: I0313 01:10:15.369253 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:10:15.369406 master-0 kubenswrapper[3985]: I0313 01:10:15.369365 3985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:10:15.369494 master-0 kubenswrapper[3985]: I0313 01:10:15.369431 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.369602 master-0 kubenswrapper[3985]: I0313 01:10:15.369499 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.369700 master-0 kubenswrapper[3985]: I0313 01:10:15.369630 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.369833 master-0 kubenswrapper[3985]: I0313 01:10:15.369766 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " 
pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.369897 master-0 kubenswrapper[3985]: I0313 01:10:15.369860 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:10:15.470878 master-0 kubenswrapper[3985]: I0313 01:10:15.470795 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:15.471138 master-0 kubenswrapper[3985]: I0313 01:10:15.471065 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.471197 master-0 kubenswrapper[3985]: I0313 01:10:15.471175 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.471234 master-0 kubenswrapper[3985]: I0313 01:10:15.471207 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:10:15.471234 master-0 kubenswrapper[3985]: I0313 01:10:15.471234 3985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.471309 master-0 kubenswrapper[3985]: I0313 01:10:15.471256 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.471309 master-0 kubenswrapper[3985]: I0313 01:10:15.471262 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.471364 master-0 kubenswrapper[3985]: I0313 01:10:15.471277 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.471364 master-0 kubenswrapper[3985]: I0313 01:10:15.471341 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.471419 master-0 kubenswrapper[3985]: I0313 01:10:15.471375 3985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.471453 master-0 kubenswrapper[3985]: I0313 01:10:15.471433 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.471485 master-0 kubenswrapper[3985]: I0313 01:10:15.471469 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.471580 master-0 kubenswrapper[3985]: I0313 01:10:15.471422 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:10:15.471613 master-0 kubenswrapper[3985]: I0313 01:10:15.471591 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:10:15.471653 master-0 kubenswrapper[3985]: I0313 01:10:15.471463 3985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:10:15.471688 master-0 kubenswrapper[3985]: I0313 01:10:15.471654 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.471688 master-0 kubenswrapper[3985]: I0313 01:10:15.471622 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.471742 master-0 kubenswrapper[3985]: I0313 01:10:15.471607 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.471742 master-0 kubenswrapper[3985]: I0313 01:10:15.471659 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:10:15.471742 master-0 kubenswrapper[3985]: I0313 01:10:15.471607 3985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:10:15.471825 master-0 kubenswrapper[3985]: I0313 01:10:15.471750 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.471953 master-0 kubenswrapper[3985]: I0313 01:10:15.471902 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.471992 master-0 kubenswrapper[3985]: I0313 01:10:15.471969 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:10:15.472051 master-0 kubenswrapper[3985]: I0313 01:10:15.472020 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.472085 master-0 kubenswrapper[3985]: I0313 
01:10:15.472068 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.472121 master-0 kubenswrapper[3985]: I0313 01:10:15.472100 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:10:15.472152 master-0 kubenswrapper[3985]: I0313 01:10:15.472115 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:10:15.472152 master-0 kubenswrapper[3985]: I0313 01:10:15.472134 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:10:15.472221 master-0 kubenswrapper[3985]: I0313 01:10:15.472165 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.472221 master-0 kubenswrapper[3985]: I0313 01:10:15.472160 
3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.472280 master-0 kubenswrapper[3985]: I0313 01:10:15.472210 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.472383 master-0 kubenswrapper[3985]: I0313 01:10:15.472287 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:10:15.472422 master-0 kubenswrapper[3985]: I0313 01:10:15.472399 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.472422 master-0 kubenswrapper[3985]: I0313 01:10:15.472307 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.472480 master-0 kubenswrapper[3985]: I0313 01:10:15.472330 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: 
\"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:10:15.472480 master-0 kubenswrapper[3985]: I0313 01:10:15.472361 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.472565 master-0 kubenswrapper[3985]: I0313 01:10:15.472479 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.472565 master-0 kubenswrapper[3985]: I0313 01:10:15.472552 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.472719 master-0 kubenswrapper[3985]: I0313 01:10:15.472688 3985 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 01:10:15.473940 master-0 kubenswrapper[3985]: E0313 01:10:15.473876 3985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 01:10:15.573175 master-0 kubenswrapper[3985]: E0313 01:10:15.573075 3985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms" Mar 13 01:10:15.622483 master-0 kubenswrapper[3985]: I0313 01:10:15.622247 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:10:15.638126 master-0 kubenswrapper[3985]: I0313 01:10:15.638063 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:10:15.668614 master-0 kubenswrapper[3985]: I0313 01:10:15.668501 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:15.693156 master-0 kubenswrapper[3985]: I0313 01:10:15.693056 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:15.704428 master-0 kubenswrapper[3985]: I0313 01:10:15.704358 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:10:15.770909 master-0 kubenswrapper[3985]: E0313 01:10:15.770696 3985 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/default/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{master-0.189c415b9a0d93e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:14.959543271 +0000 UTC m=+0.836223525,LastTimestamp:2026-03-13 01:10:14.959543271 +0000 UTC m=+0.836223525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:15.875032 master-0 kubenswrapper[3985]: I0313 01:10:15.874826 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller 
attach/detach" Mar 13 01:10:15.877127 master-0 kubenswrapper[3985]: I0313 01:10:15.877057 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:15.877238 master-0 kubenswrapper[3985]: I0313 01:10:15.877138 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:15.877238 master-0 kubenswrapper[3985]: I0313 01:10:15.877160 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:15.877352 master-0 kubenswrapper[3985]: I0313 01:10:15.877262 3985 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 01:10:15.878744 master-0 kubenswrapper[3985]: E0313 01:10:15.878673 3985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 01:10:15.914425 master-0 kubenswrapper[3985]: W0313 01:10:15.914277 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:15.914604 master-0 kubenswrapper[3985]: E0313 01:10:15.914425 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:15.961782 master-0 kubenswrapper[3985]: I0313 01:10:15.961678 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get 
"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:15.962727 master-0 kubenswrapper[3985]: W0313 01:10:15.962624 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:15.962807 master-0 kubenswrapper[3985]: E0313 01:10:15.962744 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:16.036301 master-0 kubenswrapper[3985]: W0313 01:10:16.036136 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:16.036301 master-0 kubenswrapper[3985]: E0313 01:10:16.036292 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:16.311468 master-0 kubenswrapper[3985]: W0313 01:10:16.311330 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get 
"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:16.311468 master-0 kubenswrapper[3985]: E0313 01:10:16.311431 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:16.374707 master-0 kubenswrapper[3985]: E0313 01:10:16.374606 3985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s" Mar 13 01:10:16.437336 master-0 kubenswrapper[3985]: W0313 01:10:16.437205 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod354f29997baa583b6238f7de9108ee10.slice/crio-6c67f7f67f1c3846811df64ad69df747ba5f98e7284620b7efb4801ff2425be1 WatchSource:0}: Error finding container 6c67f7f67f1c3846811df64ad69df747ba5f98e7284620b7efb4801ff2425be1: Status 404 returned error can't find the container with id 6c67f7f67f1c3846811df64ad69df747ba5f98e7284620b7efb4801ff2425be1 Mar 13 01:10:16.449306 master-0 kubenswrapper[3985]: I0313 01:10:16.449232 3985 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 01:10:16.463789 master-0 kubenswrapper[3985]: W0313 01:10:16.463705 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf78c05e1499b533b83f091333d61f045.slice/crio-4e52f6642c159916a88506443432057d57f997d443e11ff2cb2903a38a0ee186 WatchSource:0}: 
Error finding container 4e52f6642c159916a88506443432057d57f997d443e11ff2cb2903a38a0ee186: Status 404 returned error can't find the container with id 4e52f6642c159916a88506443432057d57f997d443e11ff2cb2903a38a0ee186 Mar 13 01:10:16.555436 master-0 kubenswrapper[3985]: W0313 01:10:16.555366 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode9add8df47182fc2eaf8cd78016ebe72.slice/crio-365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846 WatchSource:0}: Error finding container 365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846: Status 404 returned error can't find the container with id 365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846 Mar 13 01:10:16.595173 master-0 kubenswrapper[3985]: W0313 01:10:16.595081 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1a56802af72ce1aac6b5077f1695ac0.slice/crio-c6b3f392e02f5ed94d399a015a546ebd73a07ae53ff9ae5634f2dda7569b0d7e WatchSource:0}: Error finding container c6b3f392e02f5ed94d399a015a546ebd73a07ae53ff9ae5634f2dda7569b0d7e: Status 404 returned error can't find the container with id c6b3f392e02f5ed94d399a015a546ebd73a07ae53ff9ae5634f2dda7569b0d7e Mar 13 01:10:16.679916 master-0 kubenswrapper[3985]: I0313 01:10:16.679786 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:16.681448 master-0 kubenswrapper[3985]: I0313 01:10:16.681402 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:16.681656 master-0 kubenswrapper[3985]: I0313 01:10:16.681468 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:16.681656 master-0 kubenswrapper[3985]: I0313 01:10:16.681494 3985 kubelet_node_status.go:724] "Recording event message 
for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:16.681656 master-0 kubenswrapper[3985]: I0313 01:10:16.681612 3985 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 01:10:16.682886 master-0 kubenswrapper[3985]: E0313 01:10:16.682808 3985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 01:10:16.962347 master-0 kubenswrapper[3985]: I0313 01:10:16.962157 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:17.029534 master-0 kubenswrapper[3985]: I0313 01:10:17.029415 3985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 01:10:17.031458 master-0 kubenswrapper[3985]: E0313 01:10:17.031389 3985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:17.186880 master-0 kubenswrapper[3985]: I0313 01:10:17.186752 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846"} Mar 13 01:10:17.188856 master-0 kubenswrapper[3985]: I0313 01:10:17.188764 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"96f6e8e91d7109dc966f1dd2cbd1b74212480a19ccee4443647cc163d94cfaba"} Mar 13 01:10:17.191572 master-0 kubenswrapper[3985]: I0313 01:10:17.191500 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"4e52f6642c159916a88506443432057d57f997d443e11ff2cb2903a38a0ee186"} Mar 13 01:10:17.193074 master-0 kubenswrapper[3985]: I0313 01:10:17.193017 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"6c67f7f67f1c3846811df64ad69df747ba5f98e7284620b7efb4801ff2425be1"} Mar 13 01:10:17.194975 master-0 kubenswrapper[3985]: I0313 01:10:17.194939 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"c6b3f392e02f5ed94d399a015a546ebd73a07ae53ff9ae5634f2dda7569b0d7e"} Mar 13 01:10:17.887153 master-0 kubenswrapper[3985]: W0313 01:10:17.886527 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:17.887153 master-0 kubenswrapper[3985]: E0313 01:10:17.887108 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 
01:10:17.961349 master-0 kubenswrapper[3985]: I0313 01:10:17.961268 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:17.975650 master-0 kubenswrapper[3985]: E0313 01:10:17.975583 3985 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s" Mar 13 01:10:18.029432 master-0 kubenswrapper[3985]: W0313 01:10:18.029316 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:18.029718 master-0 kubenswrapper[3985]: E0313 01:10:18.029436 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:18.283423 master-0 kubenswrapper[3985]: I0313 01:10:18.283353 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:18.284831 master-0 kubenswrapper[3985]: I0313 01:10:18.284768 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:18.284901 master-0 kubenswrapper[3985]: I0313 01:10:18.284839 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 
01:10:18.284901 master-0 kubenswrapper[3985]: I0313 01:10:18.284871 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:18.285025 master-0 kubenswrapper[3985]: I0313 01:10:18.284980 3985 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 01:10:18.286350 master-0 kubenswrapper[3985]: E0313 01:10:18.286274 3985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 01:10:18.590164 master-0 kubenswrapper[3985]: W0313 01:10:18.589950 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:18.590164 master-0 kubenswrapper[3985]: E0313 01:10:18.590095 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:18.655931 master-0 kubenswrapper[3985]: W0313 01:10:18.655812 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:18.656189 master-0 kubenswrapper[3985]: E0313 01:10:18.655943 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:18.961619 master-0 kubenswrapper[3985]: I0313 01:10:18.961463 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:19.963302 master-0 kubenswrapper[3985]: I0313 01:10:19.963164 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:20.209856 master-0 kubenswrapper[3985]: I0313 01:10:20.209792 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"a665d6a554bcc038bf3cf3aa905f1884c4c54fb9c32ce798ba9ecbaf1bab11e0"} Mar 13 01:10:20.209856 master-0 kubenswrapper[3985]: I0313 01:10:20.209859 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"2a446f182b10829874f21b28a6050799a0e95cf3b7880d6db31740a7140ff67b"} Mar 13 01:10:20.209856 master-0 kubenswrapper[3985]: I0313 01:10:20.209860 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:20.210957 master-0 kubenswrapper[3985]: I0313 01:10:20.210912 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:20.211023 master-0 kubenswrapper[3985]: I0313 01:10:20.210961 3985 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:20.211023 master-0 kubenswrapper[3985]: I0313 01:10:20.210973 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:20.212407 master-0 kubenswrapper[3985]: I0313 01:10:20.212359 3985 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="a0021a247a97b068e059ad5f822a94ffb91a3ed3409e6c3e37ac414a6210ce2d" exitCode=0 Mar 13 01:10:20.212468 master-0 kubenswrapper[3985]: I0313 01:10:20.212420 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"a0021a247a97b068e059ad5f822a94ffb91a3ed3409e6c3e37ac414a6210ce2d"} Mar 13 01:10:20.212578 master-0 kubenswrapper[3985]: I0313 01:10:20.212538 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:20.213471 master-0 kubenswrapper[3985]: I0313 01:10:20.213403 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:20.213471 master-0 kubenswrapper[3985]: I0313 01:10:20.213435 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:20.213471 master-0 kubenswrapper[3985]: I0313 01:10:20.213444 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:20.964821 master-0 kubenswrapper[3985]: I0313 01:10:20.964772 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:21.177802 master-0 kubenswrapper[3985]: E0313 01:10:21.177710 3985 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s" Mar 13 01:10:21.217432 master-0 kubenswrapper[3985]: I0313 01:10:21.217379 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 13 01:10:21.218181 master-0 kubenswrapper[3985]: I0313 01:10:21.218104 3985 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="2e139e71556551fed6a9afc86606d4082efe8f9a39ebec88b88a39e467075896" exitCode=1 Mar 13 01:10:21.218256 master-0 kubenswrapper[3985]: I0313 01:10:21.218192 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"2e139e71556551fed6a9afc86606d4082efe8f9a39ebec88b88a39e467075896"} Mar 13 01:10:21.218256 master-0 kubenswrapper[3985]: I0313 01:10:21.218240 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:21.218371 master-0 kubenswrapper[3985]: I0313 01:10:21.218307 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:21.219317 master-0 kubenswrapper[3985]: I0313 01:10:21.219286 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:21.219317 master-0 kubenswrapper[3985]: I0313 01:10:21.219315 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:21.219458 master-0 kubenswrapper[3985]: I0313 01:10:21.219325 3985 kubelet_node_status.go:724] "Recording 
event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:21.219780 master-0 kubenswrapper[3985]: I0313 01:10:21.219753 3985 scope.go:117] "RemoveContainer" containerID="2e139e71556551fed6a9afc86606d4082efe8f9a39ebec88b88a39e467075896" Mar 13 01:10:21.221575 master-0 kubenswrapper[3985]: I0313 01:10:21.221147 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:21.221575 master-0 kubenswrapper[3985]: I0313 01:10:21.221187 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:21.221575 master-0 kubenswrapper[3985]: I0313 01:10:21.221201 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:21.335844 master-0 kubenswrapper[3985]: I0313 01:10:21.335728 3985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Mar 13 01:10:21.337096 master-0 kubenswrapper[3985]: E0313 01:10:21.337054 3985 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.sno.openstack.lab:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:21.486937 master-0 kubenswrapper[3985]: I0313 01:10:21.486886 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:21.488663 master-0 kubenswrapper[3985]: I0313 01:10:21.488633 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:21.488819 master-0 kubenswrapper[3985]: I0313 01:10:21.488671 3985 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:21.488819 master-0 kubenswrapper[3985]: I0313 01:10:21.488687 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:21.488819 master-0 kubenswrapper[3985]: I0313 01:10:21.488750 3985 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 01:10:21.489961 master-0 kubenswrapper[3985]: E0313 01:10:21.489898 3985 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/nodes\": dial tcp 192.168.32.10:6443: connect: connection refused" node="master-0" Mar 13 01:10:21.677947 master-0 kubenswrapper[3985]: W0313 01:10:21.677812 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:21.677947 master-0 kubenswrapper[3985]: E0313 01:10:21.677893 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:21.962001 master-0 kubenswrapper[3985]: I0313 01:10:21.961877 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:22.022316 master-0 kubenswrapper[3985]: W0313 01:10:22.022256 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:22.022811 master-0 kubenswrapper[3985]: E0313 01:10:22.022340 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes?fieldSelector=metadata.name%3Dmaster-0&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:22.474196 master-0 kubenswrapper[3985]: W0313 01:10:22.474118 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:22.474487 master-0 kubenswrapper[3985]: E0313 01:10:22.474209 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.sno.openstack.lab:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:22.961978 master-0 kubenswrapper[3985]: I0313 01:10:22.961911 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:23.510045 master-0 kubenswrapper[3985]: W0313 01:10:23.509964 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial 
tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:23.510675 master-0 kubenswrapper[3985]: E0313 01:10:23.510058 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.sno.openstack.lab:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 192.168.32.10:6443: connect: connection refused" logger="UnhandledError" Mar 13 01:10:23.961775 master-0 kubenswrapper[3985]: I0313 01:10:23.961687 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:24.963208 master-0 kubenswrapper[3985]: I0313 01:10:24.962501 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.sno.openstack.lab:6443/apis/storage.k8s.io/v1/csinodes/master-0?resourceVersion=0": dial tcp 192.168.32.10:6443: connect: connection refused Mar 13 01:10:25.169594 master-0 kubenswrapper[3985]: E0313 01:10:25.169486 3985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found" Mar 13 01:10:25.230374 master-0 kubenswrapper[3985]: I0313 01:10:25.230303 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"41a562ba2a46ef687ff091bc533dc160a94bdc1572141710b80e92f2c08eb013"} Mar 13 01:10:25.230600 master-0 kubenswrapper[3985]: I0313 01:10:25.230351 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:25.232239 master-0 kubenswrapper[3985]: I0313 01:10:25.231641 3985 kubelet_node_status.go:724] "Recording event message for 
node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:25.232239 master-0 kubenswrapper[3985]: I0313 01:10:25.231687 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:25.232239 master-0 kubenswrapper[3985]: I0313 01:10:25.231699 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:25.234561 master-0 kubenswrapper[3985]: I0313 01:10:25.234503 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 13 01:10:25.235185 master-0 kubenswrapper[3985]: I0313 01:10:25.235154 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/0.log" Mar 13 01:10:25.236127 master-0 kubenswrapper[3985]: I0313 01:10:25.236084 3985 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="fce486a6b99328c3c89e02cae5671347b19080082ab67d0e6384e847054c54b8" exitCode=1 Mar 13 01:10:25.236191 master-0 kubenswrapper[3985]: I0313 01:10:25.236164 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"fce486a6b99328c3c89e02cae5671347b19080082ab67d0e6384e847054c54b8"} Mar 13 01:10:25.236263 master-0 kubenswrapper[3985]: I0313 01:10:25.236227 3985 scope.go:117] "RemoveContainer" containerID="2e139e71556551fed6a9afc86606d4082efe8f9a39ebec88b88a39e467075896" Mar 13 01:10:25.236454 master-0 kubenswrapper[3985]: I0313 01:10:25.236418 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:25.240889 master-0 kubenswrapper[3985]: I0313 
01:10:25.240790 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:25.240889 master-0 kubenswrapper[3985]: I0313 01:10:25.240837 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:25.240889 master-0 kubenswrapper[3985]: I0313 01:10:25.240853 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:25.241373 master-0 kubenswrapper[3985]: I0313 01:10:25.241329 3985 scope.go:117] "RemoveContainer" containerID="fce486a6b99328c3c89e02cae5671347b19080082ab67d0e6384e847054c54b8" Mar 13 01:10:25.241634 master-0 kubenswrapper[3985]: E0313 01:10:25.241594 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 13 01:10:25.242825 master-0 kubenswrapper[3985]: I0313 01:10:25.242768 3985 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="b6ea782ca75304abc2ccc9ab19e6d9b4a2889fe649ebf475c9c95d91d8dba102" exitCode=0 Mar 13 01:10:25.242903 master-0 kubenswrapper[3985]: I0313 01:10:25.242886 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"b6ea782ca75304abc2ccc9ab19e6d9b4a2889fe649ebf475c9c95d91d8dba102"} Mar 13 01:10:25.243832 master-0 kubenswrapper[3985]: I0313 01:10:25.243003 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 
01:10:25.243832 master-0 kubenswrapper[3985]: I0313 01:10:25.243724 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:25.243832 master-0 kubenswrapper[3985]: I0313 01:10:25.243750 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:25.243832 master-0 kubenswrapper[3985]: I0313 01:10:25.243764 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:25.248879 master-0 kubenswrapper[3985]: I0313 01:10:25.247703 3985 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="bbc1eef4848241d60b2e14297f83c2738656d477e9aca36b48290bd2306fa11f" exitCode=1 Mar 13 01:10:25.248879 master-0 kubenswrapper[3985]: I0313 01:10:25.247759 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"bbc1eef4848241d60b2e14297f83c2738656d477e9aca36b48290bd2306fa11f"} Mar 13 01:10:25.248879 master-0 kubenswrapper[3985]: I0313 01:10:25.248361 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:25.249462 master-0 kubenswrapper[3985]: I0313 01:10:25.249413 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:25.249462 master-0 kubenswrapper[3985]: I0313 01:10:25.249454 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:25.249585 master-0 kubenswrapper[3985]: I0313 01:10:25.249471 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:26.251956 master-0 kubenswrapper[3985]: I0313 01:10:26.251900 3985 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"6c9bd5245949231d7973259139b8774c20bbb32018502eb3bd133d4e8aa89584"} Mar 13 01:10:26.253485 master-0 kubenswrapper[3985]: I0313 01:10:26.253457 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log" Mar 13 01:10:26.255097 master-0 kubenswrapper[3985]: I0313 01:10:26.254616 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:26.255097 master-0 kubenswrapper[3985]: I0313 01:10:26.254710 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:26.255432 master-0 kubenswrapper[3985]: I0313 01:10:26.255338 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:26.255432 master-0 kubenswrapper[3985]: I0313 01:10:26.255364 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:26.255432 master-0 kubenswrapper[3985]: I0313 01:10:26.255375 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:26.256004 master-0 kubenswrapper[3985]: I0313 01:10:26.255872 3985 scope.go:117] "RemoveContainer" containerID="fce486a6b99328c3c89e02cae5671347b19080082ab67d0e6384e847054c54b8" Mar 13 01:10:26.256045 master-0 kubenswrapper[3985]: E0313 01:10:26.256003 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-rbac-proxy-crio 
pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 13 01:10:26.256426 master-0 kubenswrapper[3985]: I0313 01:10:26.256395 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:26.256464 master-0 kubenswrapper[3985]: I0313 01:10:26.256442 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:26.256464 master-0 kubenswrapper[3985]: I0313 01:10:26.256462 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:27.089643 master-0 kubenswrapper[3985]: I0313 01:10:27.089577 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 01:10:27.090210 master-0 kubenswrapper[3985]: E0313 01:10:27.089975 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9a0d93e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:14.959543271 +0000 UTC m=+0.836223525,LastTimestamp:2026-03-13 01:10:14.959543271 +0000 UTC m=+0.836223525,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 
01:10:27.099345 master-0 kubenswrapper[3985]: E0313 01:10:27.098741 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfd37a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025579936 +0000 UTC m=+0.902260160,LastTimestamp:2026-03-13 01:10:15.025579936 +0000 UTC m=+0.902260160,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.110573 master-0 kubenswrapper[3985]: E0313 01:10:27.107545 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe06c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.02563296 +0000 UTC m=+0.902313184,LastTimestamp:2026-03-13 01:10:15.02563296 +0000 UTC m=+0.902313184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.118253 master-0 kubenswrapper[3985]: E0313 01:10:27.117264 3985 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe3e22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025647138 +0000 UTC m=+0.902327362,LastTimestamp:2026-03-13 01:10:15.025647138 +0000 UTC m=+0.902327362,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.141573 master-0 kubenswrapper[3985]: E0313 01:10:27.139944 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415ba6b979e1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.172135393 +0000 UTC m=+1.048815617,LastTimestamp:2026-03-13 01:10:15.172135393 +0000 UTC m=+1.048815617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.152536 master-0 kubenswrapper[3985]: E0313 01:10:27.152374 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfd37a0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the 
namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfd37a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025579936 +0000 UTC m=+0.902260160,LastTimestamp:2026-03-13 01:10:15.268266509 +0000 UTC m=+1.144946753,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.157988 master-0 kubenswrapper[3985]: E0313 01:10:27.157910 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe06c0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe06c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.02563296 +0000 UTC m=+0.902313184,LastTimestamp:2026-03-13 01:10:15.268294836 +0000 UTC m=+1.144975080,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.162589 master-0 kubenswrapper[3985]: E0313 01:10:27.162435 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe3e22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" 
event="&Event{ObjectMeta:{master-0.189c415b9dfe3e22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025647138 +0000 UTC m=+0.902327362,LastTimestamp:2026-03-13 01:10:15.268311734 +0000 UTC m=+1.144991977,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.166746 master-0 kubenswrapper[3985]: E0313 01:10:27.166674 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfd37a0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfd37a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025579936 +0000 UTC m=+0.902260160,LastTimestamp:2026-03-13 01:10:15.27860793 +0000 UTC m=+1.155288174,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.171357 master-0 kubenswrapper[3985]: E0313 01:10:27.171288 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe06c0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe06c0 default 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.02563296 +0000 UTC m=+0.902313184,LastTimestamp:2026-03-13 01:10:15.278641296 +0000 UTC m=+1.155321540,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.177237 master-0 kubenswrapper[3985]: E0313 01:10:27.177123 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe3e22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe3e22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025647138 +0000 UTC m=+0.902327362,LastTimestamp:2026-03-13 01:10:15.278657774 +0000 UTC m=+1.155338028,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.182090 master-0 kubenswrapper[3985]: E0313 01:10:27.182020 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfd37a0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfd37a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025579936 +0000 UTC m=+0.902260160,LastTimestamp:2026-03-13 01:10:15.280389806 +0000 UTC m=+1.157070050,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.186550 master-0 kubenswrapper[3985]: E0313 01:10:27.185402 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe06c0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe06c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.02563296 +0000 UTC m=+0.902313184,LastTimestamp:2026-03-13 01:10:15.280412344 +0000 UTC m=+1.157092588,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.191258 master-0 kubenswrapper[3985]: E0313 01:10:27.191056 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe3e22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe3e22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025647138 +0000 UTC m=+0.902327362,LastTimestamp:2026-03-13 01:10:15.280462008 +0000 UTC m=+1.157142252,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.196844 master-0 kubenswrapper[3985]: E0313 01:10:27.196739 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfd37a0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfd37a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025579936 +0000 UTC m=+0.902260160,LastTimestamp:2026-03-13 01:10:15.280497503 +0000 UTC m=+1.157177747,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.201605 master-0 kubenswrapper[3985]: E0313 01:10:27.201423 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe06c0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe06c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.02563296 +0000 UTC m=+0.902313184,LastTimestamp:2026-03-13 01:10:15.280551338 +0000 UTC m=+1.157231582,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.214205 master-0 kubenswrapper[3985]: E0313 01:10:27.214044 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe3e22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe3e22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025647138 +0000 UTC m=+0.902327362,LastTimestamp:2026-03-13 01:10:15.280568576 +0000 UTC m=+1.157248830,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.218757 master-0 kubenswrapper[3985]: E0313 01:10:27.218662 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfd37a0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfd37a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025579936 +0000 UTC m=+0.902260160,LastTimestamp:2026-03-13 01:10:15.282066296 +0000 UTC m=+1.158746550,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.222789 master-0 kubenswrapper[3985]: E0313 01:10:27.222694 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfd37a0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfd37a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025579936 +0000 UTC m=+0.902260160,LastTimestamp:2026-03-13 01:10:15.282079775 +0000 UTC m=+1.158760019,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.226358 master-0 kubenswrapper[3985]: E0313 01:10:27.226251 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe06c0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe06c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.02563296 +0000 UTC m=+0.902313184,LastTimestamp:2026-03-13 01:10:15.282095923 +0000 UTC m=+1.158776177,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.229861 master-0 kubenswrapper[3985]: E0313 01:10:27.229785 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe06c0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe06c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.02563296 +0000 UTC m=+0.902313184,LastTimestamp:2026-03-13 01:10:15.282102592 +0000 UTC m=+1.158782836,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.252541 master-0 kubenswrapper[3985]: E0313 01:10:27.249865 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe3e22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe3e22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025647138 +0000 UTC m=+0.902327362,LastTimestamp:2026-03-13 01:10:15.28212125 +0000 UTC m=+1.158801504,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.260067 master-0 kubenswrapper[3985]: I0313 01:10:27.259101 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d"} Mar 13 01:10:27.260067 master-0 kubenswrapper[3985]: I0313 01:10:27.259254 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:27.260067 master-0 kubenswrapper[3985]: I0313 01:10:27.260024 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:27.260067 master-0 kubenswrapper[3985]: I0313 01:10:27.260072 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:27.260067 master-0 kubenswrapper[3985]: I0313 01:10:27.260086 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:27.260581 master-0 kubenswrapper[3985]: I0313 01:10:27.260547 3985 scope.go:117] "RemoveContainer" containerID="bbc1eef4848241d60b2e14297f83c2738656d477e9aca36b48290bd2306fa11f" Mar 13 01:10:27.263206 master-0 kubenswrapper[3985]: E0313 01:10:27.263094 3985 event.go:359] "Server rejected event (will not retry!)" err="events 
\"master-0.189c415b9dfe3e22\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe3e22 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node master-0 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025647138 +0000 UTC m=+0.902327362,LastTimestamp:2026-03-13 01:10:15.282238395 +0000 UTC m=+1.158918639,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.271142 master-0 kubenswrapper[3985]: E0313 01:10:27.271067 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfd37a0\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfd37a0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node master-0 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.025579936 +0000 UTC m=+0.902260160,LastTimestamp:2026-03-13 01:10:15.283312516 +0000 UTC m=+1.159992760,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.283391 master-0 kubenswrapper[3985]: E0313 01:10:27.283249 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"master-0.189c415b9dfe06c0\" is forbidden: User \"system:anonymous\" 
cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{master-0.189c415b9dfe06c0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:master-0,UID:master-0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node master-0 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:15.02563296 +0000 UTC m=+0.902313184,LastTimestamp:2026-03-13 01:10:15.283349163 +0000 UTC m=+1.160029407,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.289339 master-0 kubenswrapper[3985]: E0313 01:10:27.289178 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c415bf2d7281a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:16.449148954 +0000 UTC m=+2.325829219,LastTimestamp:2026-03-13 01:10:16.449148954 +0000 UTC m=+2.325829219,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.294885 master-0 kubenswrapper[3985]: E0313 01:10:27.294756 3985 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415bf3e590d0 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:16.46687048 +0000 UTC m=+2.343550724,LastTimestamp:2026-03-13 01:10:16.46687048 +0000 UTC m=+2.343550724,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.317727 master-0 kubenswrapper[3985]: E0313 01:10:27.317103 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415bf62ff25d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:16.505299549 +0000 UTC m=+2.381979793,LastTimestamp:2026-03-13 
01:10:16.505299549 +0000 UTC m=+2.381979793,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.327957 master-0 kubenswrapper[3985]: E0313 01:10:27.327823 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415bf9623656 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:16.558925398 +0000 UTC m=+2.435605682,LastTimestamp:2026-03-13 01:10:16.558925398 +0000 UTC m=+2.435605682,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.335204 master-0 kubenswrapper[3985]: E0313 01:10:27.334953 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c415bfbbe1f12 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:16.598503186 +0000 UTC m=+2.475183440,LastTimestamp:2026-03-13 01:10:16.598503186 +0000 UTC m=+2.475183440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.342698 master-0 kubenswrapper[3985]: E0313 01:10:27.342490 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415c94e16ce0 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" in 2.608s (2.608s including waiting). 
Image size: 465086330 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:19.167730912 +0000 UTC m=+5.044411126,LastTimestamp:2026-03-13 01:10:19.167730912 +0000 UTC m=+5.044411126,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.350091 master-0 kubenswrapper[3985]: E0313 01:10:27.349877 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c415c96c1da0f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" in 2.75s (2.75s including waiting). 
Image size: 529324693 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:19.199216143 +0000 UTC m=+5.075896357,LastTimestamp:2026-03-13 01:10:19.199216143 +0000 UTC m=+5.075896357,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.355868 master-0 kubenswrapper[3985]: E0313 01:10:27.355433 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415ca2d47a05 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:19.401763333 +0000 UTC m=+5.278443547,LastTimestamp:2026-03-13 01:10:19.401763333 +0000 UTC m=+5.278443547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.361523 master-0 kubenswrapper[3985]: E0313 01:10:27.361033 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c415ca2de34fc openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:19.40240102 +0000 UTC m=+5.279081234,LastTimestamp:2026-03-13 01:10:19.40240102 +0000 UTC m=+5.279081234,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.365130 master-0 kubenswrapper[3985]: E0313 01:10:27.364915 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415ca3f7e02e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:19.420860462 +0000 UTC m=+5.297540676,LastTimestamp:2026-03-13 01:10:19.420860462 +0000 UTC m=+5.297540676,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.369385 master-0 kubenswrapper[3985]: E0313 01:10:27.369105 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-master-0-master-0.189c415ca450bed5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:19.426684629 +0000 UTC m=+5.303364843,LastTimestamp:2026-03-13 01:10:19.426684629 +0000 UTC m=+5.303364843,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.374297 master-0 kubenswrapper[3985]: E0313 01:10:27.373999 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c415ca47c9bec openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:19.429559276 +0000 UTC m=+5.306239490,LastTimestamp:2026-03-13 01:10:19.429559276 +0000 UTC m=+5.306239490,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.378863 master-0 kubenswrapper[3985]: E0313 01:10:27.378785 3985 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c415cb0bb544e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:19.634996302 +0000 UTC m=+5.511676516,LastTimestamp:2026-03-13 01:10:19.634996302 +0000 UTC m=+5.511676516,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.382950 master-0 kubenswrapper[3985]: E0313 01:10:27.382823 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c415cb1e3e51b openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:19.654432027 +0000 UTC m=+5.531112241,LastTimestamp:2026-03-13 01:10:19.654432027 +0000 UTC m=+5.531112241,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.386980 master-0 kubenswrapper[3985]: E0313 01:10:27.386866 3985 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415cd35d4994 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:20.216035732 +0000 UTC m=+6.092715936,LastTimestamp:2026-03-13 01:10:20.216035732 +0000 UTC m=+6.092715936,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.391047 master-0 kubenswrapper[3985]: E0313 01:10:27.390826 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415cf158907e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:20.719042686 +0000 UTC m=+6.595722900,LastTimestamp:2026-03-13 
01:10:20.719042686 +0000 UTC m=+6.595722900,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.394460 master-0 kubenswrapper[3985]: E0313 01:10:27.394395 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415cfe11a16f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:20.932497775 +0000 UTC m=+6.809177979,LastTimestamp:2026-03-13 01:10:20.932497775 +0000 UTC m=+6.809177979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.399187 master-0 kubenswrapper[3985]: E0313 01:10:27.399046 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c415cd35d4994\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415cd35d4994 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:20.216035732 +0000 UTC m=+6.092715936,LastTimestamp:2026-03-13 01:10:21.224163792 +0000 UTC m=+7.100843996,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.404023 master-0 kubenswrapper[3985]: E0313 01:10:27.403866 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415dbfc3e645 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.715s (7.715s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.182183493 +0000 UTC m=+10.058863717,LastTimestamp:2026-03-13 01:10:24.182183493 +0000 UTC m=+10.058863717,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.407959 master-0 kubenswrapper[3985]: E0313 01:10:27.407803 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415dc144bcb0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.702s (7.702s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.207404208 +0000 UTC m=+10.084084422,LastTimestamp:2026-03-13 01:10:24.207404208 +0000 UTC m=+10.084084422,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.434527 master-0 kubenswrapper[3985]: E0313 01:10:27.432969 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c415dc272ea61 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" in 7.628s (7.628s including waiting). 
Image size: 943837171 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.227207777 +0000 UTC m=+10.103888011,LastTimestamp:2026-03-13 01:10:24.227207777 +0000 UTC m=+10.103888011,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.446931 master-0 kubenswrapper[3985]: E0313 01:10:27.446797 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c415cf158907e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415cf158907e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:20.719042686 +0000 UTC m=+6.595722900,LastTimestamp:2026-03-13 01:10:24.32582655 +0000 UTC m=+10.202506764,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.458587 master-0 kubenswrapper[3985]: E0313 01:10:27.458170 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c415cfe11a16f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415cfe11a16f openshift-machine-config-operator 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:20.932497775 +0000 UTC m=+6.809177979,LastTimestamp:2026-03-13 01:10:24.345642229 +0000 UTC m=+10.222322453,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.463072 master-0 kubenswrapper[3985]: E0313 01:10:27.462960 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415dceebe782 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.43646349 +0000 UTC m=+10.313143704,LastTimestamp:2026-03-13 01:10:24.43646349 +0000 UTC m=+10.313143704,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.468295 master-0 kubenswrapper[3985]: E0313 01:10:27.468123 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace 
\"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415dcf9219cc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.44735534 +0000 UTC m=+10.324035564,LastTimestamp:2026-03-13 01:10:24.44735534 +0000 UTC m=+10.324035564,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.474885 master-0 kubenswrapper[3985]: E0313 01:10:27.474729 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415dcfa76e68 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.448753256 +0000 UTC m=+10.325433470,LastTimestamp:2026-03-13 01:10:24.448753256 +0000 UTC m=+10.325433470,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.481465 master-0 
kubenswrapper[3985]: E0313 01:10:27.481330 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c415dd59b7e98 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.548634264 +0000 UTC m=+10.425314518,LastTimestamp:2026-03-13 01:10:24.548634264 +0000 UTC m=+10.425314518,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.486087 master-0 kubenswrapper[3985]: E0313 01:10:27.485991 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415dd5bf7881 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.550992001 +0000 UTC m=+10.427672215,LastTimestamp:2026-03-13 01:10:24.550992001 +0000 UTC m=+10.427672215,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.490262 master-0 kubenswrapper[3985]: E0313 01:10:27.490174 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-scheduler-master-0.189c415dd6942ff8 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-scheduler-master-0,UID:a1a56802af72ce1aac6b5077f1695ac0,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.5649326 +0000 UTC m=+10.441612864,LastTimestamp:2026-03-13 01:10:24.5649326 +0000 UTC m=+10.441612864,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.494330 master-0 kubenswrapper[3985]: E0313 01:10:27.494255 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415dd6d44b67 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.569133927 +0000 UTC m=+10.445814141,LastTimestamp:2026-03-13 01:10:24.569133927 
+0000 UTC m=+10.445814141,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.498713 master-0 kubenswrapper[3985]: E0313 01:10:27.498569 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415dfee8ad19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:25.241558297 +0000 UTC m=+11.118238531,LastTimestamp:2026-03-13 01:10:25.241558297 +0000 UTC m=+11.118238531,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.504004 master-0 kubenswrapper[3985]: E0313 01:10:27.503824 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415dff4f4b78 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:25.248283512 +0000 UTC m=+11.124963736,LastTimestamp:2026-03-13 01:10:25.248283512 +0000 UTC m=+11.124963736,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.515124 master-0 kubenswrapper[3985]: E0313 01:10:27.514972 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415e0cf66af5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:25.477339893 +0000 UTC m=+11.354020107,LastTimestamp:2026-03-13 01:10:25.477339893 +0000 UTC m=+11.354020107,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.520974 master-0 kubenswrapper[3985]: E0313 01:10:27.520894 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415e0dcb4311 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:25.491288849 +0000 UTC m=+11.367969063,LastTimestamp:2026-03-13 01:10:25.491288849 +0000 UTC m=+11.367969063,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.527358 master-0 kubenswrapper[3985]: E0313 01:10:27.527180 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415e0de1141b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulling,Message:Pulling image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\",Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:25.492718619 +0000 UTC m=+11.369398863,LastTimestamp:2026-03-13 01:10:25.492718619 +0000 UTC m=+11.369398863,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.534877 master-0 kubenswrapper[3985]: E0313 01:10:27.534728 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c415dfee8ad19\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415dfee8ad19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:25.241558297 +0000 UTC m=+11.118238531,LastTimestamp:2026-03-13 01:10:26.255982824 +0000 UTC m=+12.132663038,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.570985 master-0 kubenswrapper[3985]: E0313 01:10:27.570868 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415e51265d7f kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\" in 2.172s (2.172s including waiting). Image size: 505242594 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:26.621332863 +0000 UTC m=+12.498013097,LastTimestamp:2026-03-13 01:10:26.621332863 +0000 UTC m=+12.498013097,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.581947 master-0 kubenswrapper[3985]: E0313 01:10:27.581892 3985 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 01:10:27.582389 master-0 kubenswrapper[3985]: E0313 01:10:27.582279 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415e5e0fe3e6 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:26.83796375 +0000 UTC 
m=+12.714643964,LastTimestamp:2026-03-13 01:10:26.83796375 +0000 UTC m=+12.714643964,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.586183 master-0 kubenswrapper[3985]: E0313 01:10:27.586105 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415e5ec49f26 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:26.849808166 +0000 UTC m=+12.726488380,LastTimestamp:2026-03-13 01:10:26.849808166 +0000 UTC m=+12.726488380,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.591313 master-0 kubenswrapper[3985]: E0313 01:10:27.591226 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415e7764081e kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image 
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:27.262908446 +0000 UTC m=+13.139588650,LastTimestamp:2026-03-13 01:10:27.262908446 +0000 UTC m=+13.139588650,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.595634 master-0 kubenswrapper[3985]: E0313 01:10:27.595383 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c415dceebe782\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415dceebe782 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.43646349 +0000 UTC m=+10.313143704,LastTimestamp:2026-03-13 01:10:27.475922413 +0000 UTC m=+13.352602627,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.600772 master-0 kubenswrapper[3985]: E0313 01:10:27.600693 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"bootstrap-kube-controller-manager-master-0.189c415dcf9219cc\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"kube-system\"" 
event="&Event{ObjectMeta:{bootstrap-kube-controller-manager-master-0.189c415dcf9219cc kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:bootstrap-kube-controller-manager-master-0,UID:f78c05e1499b533b83f091333d61f045,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:24.44735534 +0000 UTC m=+10.324035564,LastTimestamp:2026-03-13 01:10:27.49071219 +0000 UTC m=+13.367392404,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:27.890681 master-0 kubenswrapper[3985]: I0313 01:10:27.890487 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:27.892412 master-0 kubenswrapper[3985]: I0313 01:10:27.892364 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:27.892483 master-0 kubenswrapper[3985]: I0313 01:10:27.892421 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:27.892483 master-0 kubenswrapper[3985]: I0313 01:10:27.892444 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:27.892589 master-0 kubenswrapper[3985]: I0313 01:10:27.892553 3985 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 01:10:27.901311 master-0 kubenswrapper[3985]: E0313 01:10:27.901245 3985 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0" Mar 13 01:10:27.974551 
master-0 kubenswrapper[3985]: I0313 01:10:27.968244 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 01:10:28.267860 master-0 kubenswrapper[3985]: I0313 01:10:28.267803 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff"} Mar 13 01:10:28.268595 master-0 kubenswrapper[3985]: I0313 01:10:28.267960 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:28.269440 master-0 kubenswrapper[3985]: I0313 01:10:28.269391 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:28.269485 master-0 kubenswrapper[3985]: I0313 01:10:28.269452 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:28.269485 master-0 kubenswrapper[3985]: I0313 01:10:28.269466 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:28.565656 master-0 kubenswrapper[3985]: I0313 01:10:28.565466 3985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:28.572916 master-0 kubenswrapper[3985]: I0313 01:10:28.572847 3985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:28.887411 master-0 kubenswrapper[3985]: E0313 01:10:28.887230 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415ed7c19168 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\" in 3.386s (3.386s including waiting). Image size: 514980169 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:28.879651176 +0000 UTC m=+14.756331430,LastTimestamp:2026-03-13 01:10:28.879651176 +0000 UTC m=+14.756331430,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:28.968652 master-0 kubenswrapper[3985]: I0313 01:10:28.968568 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 01:10:29.109526 master-0 kubenswrapper[3985]: E0313 01:10:29.109359 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415ee51d9854 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:29.103786068 +0000 UTC m=+14.980466282,LastTimestamp:2026-03-13 01:10:29.103786068 +0000 UTC m=+14.980466282,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:29.118761 master-0 kubenswrapper[3985]: E0313 01:10:29.118540 3985 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{bootstrap-kube-apiserver-master-0.189c415ee5af06a8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:bootstrap-kube-apiserver-master-0,UID:5f77c8e18b751d90bc0dfe2d4e304050,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:29.113317032 +0000 UTC m=+14.989997266,LastTimestamp:2026-03-13 01:10:29.113317032 +0000 UTC m=+14.989997266,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:10:29.273821 master-0 kubenswrapper[3985]: I0313 01:10:29.273555 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" 
event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"8df1059c68299a3330235cc4d111397a59bfb0c4b40d95af664427109c129231"} Mar 13 01:10:29.273821 master-0 kubenswrapper[3985]: I0313 01:10:29.273598 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:29.273821 master-0 kubenswrapper[3985]: I0313 01:10:29.273818 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:29.274931 master-0 kubenswrapper[3985]: I0313 01:10:29.273598 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:29.275248 master-0 kubenswrapper[3985]: I0313 01:10:29.275212 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:29.275248 master-0 kubenswrapper[3985]: I0313 01:10:29.275246 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:29.275248 master-0 kubenswrapper[3985]: I0313 01:10:29.275257 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:29.275658 master-0 kubenswrapper[3985]: I0313 01:10:29.275611 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:29.276296 master-0 kubenswrapper[3985]: I0313 01:10:29.275816 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:29.276296 master-0 kubenswrapper[3985]: I0313 01:10:29.275859 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:29.698071 master-0 kubenswrapper[3985]: I0313 01:10:29.697866 3985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: 
Rotating certificates Mar 13 01:10:29.721174 master-0 kubenswrapper[3985]: I0313 01:10:29.721121 3985 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Mar 13 01:10:29.968075 master-0 kubenswrapper[3985]: I0313 01:10:29.967881 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 01:10:30.276037 master-0 kubenswrapper[3985]: I0313 01:10:30.275963 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:30.277140 master-0 kubenswrapper[3985]: I0313 01:10:30.275989 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:30.277387 master-0 kubenswrapper[3985]: I0313 01:10:30.277330 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:30.277387 master-0 kubenswrapper[3985]: I0313 01:10:30.277375 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:30.277387 master-0 kubenswrapper[3985]: I0313 01:10:30.277393 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:30.277654 master-0 kubenswrapper[3985]: I0313 01:10:30.277393 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:30.277654 master-0 kubenswrapper[3985]: I0313 01:10:30.277453 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:30.277654 master-0 kubenswrapper[3985]: I0313 01:10:30.277474 3985 kubelet_node_status.go:724] "Recording event message for node" 
node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:30.968392 master-0 kubenswrapper[3985]: I0313 01:10:30.968260 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 01:10:31.217131 master-0 kubenswrapper[3985]: W0313 01:10:31.217033 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 13 01:10:31.217131 master-0 kubenswrapper[3985]: E0313 01:10:31.217127 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Mar 13 01:10:31.371697 master-0 kubenswrapper[3985]: I0313 01:10:31.371609 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:31.372653 master-0 kubenswrapper[3985]: I0313 01:10:31.371866 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:31.373618 master-0 kubenswrapper[3985]: I0313 01:10:31.373563 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:31.373741 master-0 kubenswrapper[3985]: I0313 01:10:31.373627 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:31.373741 master-0 kubenswrapper[3985]: I0313 01:10:31.373647 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:31.574800 
master-0 kubenswrapper[3985]: I0313 01:10:31.574694 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:10:31.575148 master-0 kubenswrapper[3985]: I0313 01:10:31.574966 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:31.576892 master-0 kubenswrapper[3985]: I0313 01:10:31.576842 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:31.577096 master-0 kubenswrapper[3985]: I0313 01:10:31.577072 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:31.577311 master-0 kubenswrapper[3985]: I0313 01:10:31.577288 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:31.967107 master-0 kubenswrapper[3985]: I0313 01:10:31.966984 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 01:10:32.006875 master-0 kubenswrapper[3985]: I0313 01:10:32.006798 3985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:32.016254 master-0 kubenswrapper[3985]: I0313 01:10:32.016181 3985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:32.141874 master-0 kubenswrapper[3985]: W0313 01:10:32.141800 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster 
scope Mar 13 01:10:32.142096 master-0 kubenswrapper[3985]: E0313 01:10:32.141887 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 13 01:10:32.282705 master-0 kubenswrapper[3985]: I0313 01:10:32.282596 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:32.284266 master-0 kubenswrapper[3985]: I0313 01:10:32.284162 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:32.284390 master-0 kubenswrapper[3985]: I0313 01:10:32.284275 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:32.284390 master-0 kubenswrapper[3985]: I0313 01:10:32.284306 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:32.294346 master-0 kubenswrapper[3985]: I0313 01:10:32.294262 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:10:32.327746 master-0 kubenswrapper[3985]: W0313 01:10:32.327661 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "master-0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 13 01:10:32.328003 master-0 kubenswrapper[3985]: E0313 01:10:32.327751 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"master-0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the 
cluster scope" logger="UnhandledError" Mar 13 01:10:32.968151 master-0 kubenswrapper[3985]: I0313 01:10:32.968082 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 01:10:33.286245 master-0 kubenswrapper[3985]: I0313 01:10:33.286171 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:33.287438 master-0 kubenswrapper[3985]: I0313 01:10:33.287378 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:33.287438 master-0 kubenswrapper[3985]: I0313 01:10:33.287432 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:33.287663 master-0 kubenswrapper[3985]: I0313 01:10:33.287451 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:33.966198 master-0 kubenswrapper[3985]: I0313 01:10:33.966020 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Mar 13 01:10:34.289573 master-0 kubenswrapper[3985]: I0313 01:10:34.289226 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:34.290585 master-0 kubenswrapper[3985]: I0313 01:10:34.290413 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:34.290585 master-0 kubenswrapper[3985]: I0313 01:10:34.290471 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 
01:10:34.290585 master-0 kubenswrapper[3985]: I0313 01:10:34.290489 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:34.473085 master-0 kubenswrapper[3985]: W0313 01:10:34.472951 3985 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 13 01:10:34.473085 master-0 kubenswrapper[3985]: E0313 01:10:34.473023 3985 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Mar 13 01:10:34.591549 master-0 kubenswrapper[3985]: E0313 01:10:34.591347 3985 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Mar 13 01:10:34.902249 master-0 kubenswrapper[3985]: I0313 01:10:34.901890 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:10:34.905250 master-0 kubenswrapper[3985]: I0313 01:10:34.904120 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:10:34.905250 master-0 kubenswrapper[3985]: I0313 01:10:34.904204 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:10:34.905250 master-0 kubenswrapper[3985]: I0313 01:10:34.904224 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" 
Mar 13 01:10:34.905250 master-0 kubenswrapper[3985]: I0313 01:10:34.904331 3985 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 01:10:34.915258 master-0 kubenswrapper[3985]: E0313 01:10:34.913651 3985 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 13 01:10:34.968474 master-0 kubenswrapper[3985]: I0313 01:10:34.968402 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:35.063043 master-0 kubenswrapper[3985]: I0313 01:10:35.062967 3985 csr.go:261] certificate signing request csr-cnnr4 is approved, waiting to be issued
Mar 13 01:10:35.170277 master-0 kubenswrapper[3985]: E0313 01:10:35.170050 3985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 01:10:35.730711 master-0 kubenswrapper[3985]: I0313 01:10:35.730617 3985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:10:35.731650 master-0 kubenswrapper[3985]: I0313 01:10:35.730890 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 01:10:35.732601 master-0 kubenswrapper[3985]: I0313 01:10:35.732505 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 01:10:35.732903 master-0 kubenswrapper[3985]: I0313 01:10:35.732615 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 01:10:35.732903 master-0 kubenswrapper[3985]: I0313 01:10:35.732631 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 01:10:35.738810 master-0 kubenswrapper[3985]: I0313 01:10:35.738754 3985 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:10:35.969341 master-0 kubenswrapper[3985]: I0313 01:10:35.969271 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:36.296458 master-0 kubenswrapper[3985]: I0313 01:10:36.296381 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 01:10:36.297916 master-0 kubenswrapper[3985]: I0313 01:10:36.297858 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 01:10:36.297916 master-0 kubenswrapper[3985]: I0313 01:10:36.297901 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 01:10:36.297916 master-0 kubenswrapper[3985]: I0313 01:10:36.297917 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 01:10:36.302046 master-0 kubenswrapper[3985]: I0313 01:10:36.302000 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:10:36.968186 master-0 kubenswrapper[3985]: I0313 01:10:36.968074 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:37.299153 master-0 kubenswrapper[3985]: I0313 01:10:37.299055 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 01:10:37.300696 master-0 kubenswrapper[3985]: I0313 01:10:37.300628 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 01:10:37.300839 master-0 kubenswrapper[3985]: I0313 01:10:37.300713 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 01:10:37.300839 master-0 kubenswrapper[3985]: I0313 01:10:37.300740 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 01:10:37.306034 master-0 kubenswrapper[3985]: I0313 01:10:37.305972 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:10:37.970784 master-0 kubenswrapper[3985]: I0313 01:10:37.970644 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:38.302125 master-0 kubenswrapper[3985]: I0313 01:10:38.302048 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 01:10:38.303227 master-0 kubenswrapper[3985]: I0313 01:10:38.303175 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 01:10:38.303463 master-0 kubenswrapper[3985]: I0313 01:10:38.303277 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 01:10:38.303463 master-0 kubenswrapper[3985]: I0313 01:10:38.303322 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 01:10:38.967928 master-0 kubenswrapper[3985]: I0313 01:10:38.967855 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:39.968787 master-0 kubenswrapper[3985]: I0313 01:10:39.968684 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:40.969557 master-0 kubenswrapper[3985]: I0313 01:10:40.969428 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:41.177236 master-0 kubenswrapper[3985]: I0313 01:10:41.177170 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 01:10:41.178582 master-0 kubenswrapper[3985]: I0313 01:10:41.178546 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 01:10:41.178723 master-0 kubenswrapper[3985]: I0313 01:10:41.178601 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 01:10:41.178723 master-0 kubenswrapper[3985]: I0313 01:10:41.178619 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 01:10:41.179144 master-0 kubenswrapper[3985]: I0313 01:10:41.179098 3985 scope.go:117] "RemoveContainer" containerID="fce486a6b99328c3c89e02cae5671347b19080082ab67d0e6384e847054c54b8"
Mar 13 01:10:41.191743 master-0 kubenswrapper[3985]: E0313 01:10:41.191434 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c415cd35d4994\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415cd35d4994 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:20.216035732 +0000 UTC m=+6.092715936,LastTimestamp:2026-03-13 01:10:41.183349084 +0000 UTC m=+27.060029298,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 01:10:41.470143 master-0 kubenswrapper[3985]: E0313 01:10:41.469948 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c415cf158907e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415cf158907e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:20.719042686 +0000 UTC m=+6.595722900,LastTimestamp:2026-03-13 01:10:41.461342703 +0000 UTC m=+27.338022957,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 01:10:41.487821 master-0 kubenswrapper[3985]: E0313 01:10:41.487585 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c415cfe11a16f\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415cfe11a16f openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:20.932497775 +0000 UTC m=+6.809177979,LastTimestamp:2026-03-13 01:10:41.478656347 +0000 UTC m=+27.355336601,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 01:10:41.598255 master-0 kubenswrapper[3985]: E0313 01:10:41.598156 3985 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"master-0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s"
Mar 13 01:10:41.914506 master-0 kubenswrapper[3985]: I0313 01:10:41.914337 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 01:10:41.916981 master-0 kubenswrapper[3985]: I0313 01:10:41.916927 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 01:10:41.917121 master-0 kubenswrapper[3985]: I0313 01:10:41.916992 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 01:10:41.917121 master-0 kubenswrapper[3985]: I0313 01:10:41.917015 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 01:10:41.917121 master-0 kubenswrapper[3985]: I0313 01:10:41.917106 3985 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 01:10:41.925219 master-0 kubenswrapper[3985]: E0313 01:10:41.925127 3985 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="master-0"
Mar 13 01:10:41.968239 master-0 kubenswrapper[3985]: I0313 01:10:41.968073 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:42.318228 master-0 kubenswrapper[3985]: I0313 01:10:42.318177 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 13 01:10:42.319709 master-0 kubenswrapper[3985]: I0313 01:10:42.319655 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/1.log"
Mar 13 01:10:42.320872 master-0 kubenswrapper[3985]: I0313 01:10:42.320817 3985 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="6d8995670c2a83fdd48a121ac1de3a71b9ce55c04e64601cc3a96c583c68bc2c" exitCode=1
Mar 13 01:10:42.320986 master-0 kubenswrapper[3985]: I0313 01:10:42.320878 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"6d8995670c2a83fdd48a121ac1de3a71b9ce55c04e64601cc3a96c583c68bc2c"}
Mar 13 01:10:42.320986 master-0 kubenswrapper[3985]: I0313 01:10:42.320935 3985 scope.go:117] "RemoveContainer" containerID="fce486a6b99328c3c89e02cae5671347b19080082ab67d0e6384e847054c54b8"
Mar 13 01:10:42.321251 master-0 kubenswrapper[3985]: I0313 01:10:42.321195 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 01:10:42.323357 master-0 kubenswrapper[3985]: I0313 01:10:42.322882 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 01:10:42.323357 master-0 kubenswrapper[3985]: I0313 01:10:42.322995 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 01:10:42.323357 master-0 kubenswrapper[3985]: I0313 01:10:42.323015 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 01:10:42.323832 master-0 kubenswrapper[3985]: I0313 01:10:42.323781 3985 scope.go:117] "RemoveContainer" containerID="6d8995670c2a83fdd48a121ac1de3a71b9ce55c04e64601cc3a96c583c68bc2c"
Mar 13 01:10:42.324268 master-0 kubenswrapper[3985]: E0313 01:10:42.324210 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72"
Mar 13 01:10:42.333912 master-0 kubenswrapper[3985]: E0313 01:10:42.333738 3985 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-rbac-proxy-crio-master-0.189c415dfee8ad19\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-master-0.189c415dfee8ad19 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-master-0,UID:e9add8df47182fc2eaf8cd78016ebe72,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:BackOff,Message:Back-off restarting failed container kube-rbac-proxy-crio in pod kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72),Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:10:25.241558297 +0000 UTC m=+11.118238531,LastTimestamp:2026-03-13 01:10:42.324125809 +0000 UTC m=+28.200806053,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 01:10:42.968177 master-0 kubenswrapper[3985]: I0313 01:10:42.968087 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:43.327295 master-0 kubenswrapper[3985]: I0313 01:10:43.326853 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 13 01:10:43.966113 master-0 kubenswrapper[3985]: I0313 01:10:43.965959 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:44.966721 master-0 kubenswrapper[3985]: I0313 01:10:44.966585 3985 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "master-0" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope
Mar 13 01:10:45.171254 master-0 kubenswrapper[3985]: E0313 01:10:45.171164 3985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 01:10:45.320820 master-0 kubenswrapper[3985]: I0313 01:10:45.320761 3985 csr.go:257] certificate signing request csr-cnnr4 is issued
Mar 13 01:10:45.825640 master-0 kubenswrapper[3985]: I0313 01:10:45.825555 3985 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Mar 13 01:10:45.970400 master-0 kubenswrapper[3985]: I0313 01:10:45.970305 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:45.987178 master-0 kubenswrapper[3985]: I0313 01:10:45.987089 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:46.043915 master-0 kubenswrapper[3985]: I0313 01:10:46.043828 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:46.304087 master-0 kubenswrapper[3985]: I0313 01:10:46.303999 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:46.304087 master-0 kubenswrapper[3985]: E0313 01:10:46.304048 3985 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 13 01:10:46.322737 master-0 kubenswrapper[3985]: I0313 01:10:46.322589 3985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 01:02:11 +0000 UTC, rotation deadline is 2026-03-13 20:17:57.46980787 +0000 UTC
Mar 13 01:10:46.322737 master-0 kubenswrapper[3985]: I0313 01:10:46.322662 3985 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h7m11.147151563s for next certificate rotation
Mar 13 01:10:46.326671 master-0 kubenswrapper[3985]: I0313 01:10:46.326607 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:46.343244 master-0 kubenswrapper[3985]: I0313 01:10:46.343157 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:46.407968 master-0 kubenswrapper[3985]: I0313 01:10:46.407898 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:46.683226 master-0 kubenswrapper[3985]: I0313 01:10:46.683065 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:46.683226 master-0 kubenswrapper[3985]: E0313 01:10:46.683110 3985 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 13 01:10:46.786842 master-0 kubenswrapper[3985]: I0313 01:10:46.786690 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:46.803304 master-0 kubenswrapper[3985]: I0313 01:10:46.803218 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:46.863493 master-0 kubenswrapper[3985]: I0313 01:10:46.863408 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:47.134087 master-0 kubenswrapper[3985]: I0313 01:10:47.133993 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:47.134087 master-0 kubenswrapper[3985]: E0313 01:10:47.134071 3985 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 13 01:10:47.691381 master-0 kubenswrapper[3985]: I0313 01:10:47.691288 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:47.709670 master-0 kubenswrapper[3985]: I0313 01:10:47.709608 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:47.769270 master-0 kubenswrapper[3985]: I0313 01:10:47.769149 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:48.051736 master-0 kubenswrapper[3985]: I0313 01:10:48.051631 3985 nodeinfomanager.go:401] Failed to publish CSINode: nodes "master-0" not found
Mar 13 01:10:48.051736 master-0 kubenswrapper[3985]: E0313 01:10:48.051691 3985 csi_plugin.go:305] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "master-0" not found
Mar 13 01:10:48.604702 master-0 kubenswrapper[3985]: E0313 01:10:48.604566 3985 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"master-0\" not found" node="master-0"
Mar 13 01:10:48.887403 master-0 kubenswrapper[3985]: I0313 01:10:48.887246 3985 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 13 01:10:48.926031 master-0 kubenswrapper[3985]: I0313 01:10:48.925949 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 01:10:48.927470 master-0 kubenswrapper[3985]: I0313 01:10:48.927424 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 01:10:48.927470 master-0 kubenswrapper[3985]: I0313 01:10:48.927472 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 01:10:48.927687 master-0 kubenswrapper[3985]: I0313 01:10:48.927487 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID"
Mar 13 01:10:48.927687 master-0 kubenswrapper[3985]: I0313 01:10:48.927569 3985 kubelet_node_status.go:76] "Attempting to register node" node="master-0"
Mar 13 01:10:48.937849 master-0 kubenswrapper[3985]: I0313 01:10:48.937774 3985 kubelet_node_status.go:79] "Successfully registered node" node="master-0"
Mar 13 01:10:48.937849 master-0 kubenswrapper[3985]: E0313 01:10:48.937804 3985 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": node \"master-0\" not found"
Mar 13 01:10:48.951226 master-0 kubenswrapper[3985]: E0313 01:10:48.951193 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:48.985505 master-0 kubenswrapper[3985]: I0313 01:10:48.985412 3985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates
Mar 13 01:10:48.998526 master-0 kubenswrapper[3985]: I0313 01:10:48.998449 3985 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146
Mar 13 01:10:49.051497 master-0 kubenswrapper[3985]: E0313 01:10:49.051392 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:49.151866 master-0 kubenswrapper[3985]: E0313 01:10:49.151670 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:49.252357 master-0 kubenswrapper[3985]: E0313 01:10:49.252270 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:49.353269 master-0 kubenswrapper[3985]: E0313 01:10:49.353174 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:49.454402 master-0 kubenswrapper[3985]: E0313 01:10:49.454175 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:49.554573 master-0 kubenswrapper[3985]: E0313 01:10:49.554409 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:49.655379 master-0 kubenswrapper[3985]: E0313 01:10:49.655282 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:49.756555 master-0 kubenswrapper[3985]: E0313 01:10:49.756432 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:49.857294 master-0 kubenswrapper[3985]: E0313 01:10:49.857206 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:49.957817 master-0 kubenswrapper[3985]: E0313 01:10:49.957686 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:50.058235 master-0 kubenswrapper[3985]: E0313 01:10:50.057991 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:50.159137 master-0 kubenswrapper[3985]: E0313 01:10:50.158936 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:50.259833 master-0 kubenswrapper[3985]: E0313 01:10:50.259747 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:50.361007 master-0 kubenswrapper[3985]: E0313 01:10:50.360781 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:50.461945 master-0 kubenswrapper[3985]: E0313 01:10:50.461848 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:50.562880 master-0 kubenswrapper[3985]: E0313 01:10:50.562721 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:50.631087 master-0 kubenswrapper[3985]: I0313 01:10:50.630909 3985 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 13 01:10:50.663548 master-0 kubenswrapper[3985]: E0313 01:10:50.663347 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:50.764755 master-0 kubenswrapper[3985]: E0313 01:10:50.764612 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:50.865752 master-0 kubenswrapper[3985]: E0313 01:10:50.865656 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:50.966070 master-0 kubenswrapper[3985]: E0313 01:10:50.965854 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:51.067105 master-0 kubenswrapper[3985]: E0313 01:10:51.066962 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:51.167903 master-0 kubenswrapper[3985]: E0313 01:10:51.167791 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:51.269142 master-0 kubenswrapper[3985]: E0313 01:10:51.269007 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:51.369264 master-0 kubenswrapper[3985]: E0313 01:10:51.369184 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:51.470037 master-0 kubenswrapper[3985]: E0313 01:10:51.469909 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:51.571071 master-0 kubenswrapper[3985]: E0313 01:10:51.570881 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:51.671370 master-0 kubenswrapper[3985]: E0313 01:10:51.671227 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:51.772587 master-0 kubenswrapper[3985]: E0313 01:10:51.772453 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:51.873378 master-0 kubenswrapper[3985]: E0313 01:10:51.873043 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:51.973830 master-0 kubenswrapper[3985]: E0313 01:10:51.973711 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:52.076572 master-0 kubenswrapper[3985]: E0313 01:10:52.076399 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:52.176828 master-0 kubenswrapper[3985]: E0313 01:10:52.176608 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:52.276994 master-0 kubenswrapper[3985]: E0313 01:10:52.276881 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:52.378183 master-0 kubenswrapper[3985]: E0313 01:10:52.378060 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:52.479063 master-0 kubenswrapper[3985]: E0313 01:10:52.478971 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:52.579495 master-0 kubenswrapper[3985]: E0313 01:10:52.579373 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:52.680414 master-0 kubenswrapper[3985]: E0313 01:10:52.680295 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:52.781013 master-0 kubenswrapper[3985]: E0313 01:10:52.780760 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:52.881166 master-0 kubenswrapper[3985]: E0313 01:10:52.881021 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:52.981818 master-0 kubenswrapper[3985]: E0313 01:10:52.981670 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:53.082139 master-0 kubenswrapper[3985]: E0313 01:10:53.081888 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:53.183078 master-0 kubenswrapper[3985]: E0313 01:10:53.182960 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:53.284352 master-0 kubenswrapper[3985]: E0313 01:10:53.284217 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:53.384976 master-0 kubenswrapper[3985]: E0313 01:10:53.384723 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:53.486281 master-0 kubenswrapper[3985]: E0313 01:10:53.486104 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:53.586984 master-0 kubenswrapper[3985]: E0313 01:10:53.586840 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:53.687868 master-0 kubenswrapper[3985]: E0313 01:10:53.687568 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:53.788315 master-0 kubenswrapper[3985]: E0313 01:10:53.788217 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:53.889445 master-0 kubenswrapper[3985]: E0313 01:10:53.889343 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:53.989545 master-0 kubenswrapper[3985]: E0313 01:10:53.989458 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:54.090142 master-0 kubenswrapper[3985]: E0313 01:10:54.090046 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:54.191070 master-0 kubenswrapper[3985]: E0313 01:10:54.190971 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:54.291578 master-0 kubenswrapper[3985]: E0313 01:10:54.291372 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:54.391954 master-0 kubenswrapper[3985]: E0313 01:10:54.391838 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:54.493233 master-0 kubenswrapper[3985]: E0313 01:10:54.493102 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:54.594164 master-0 kubenswrapper[3985]: E0313 01:10:54.593935 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:54.694787 master-0 kubenswrapper[3985]: E0313 01:10:54.694696 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:54.795990 master-0 kubenswrapper[3985]: E0313 01:10:54.795871 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:54.896855 master-0 kubenswrapper[3985]: E0313 01:10:54.896654 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:54.997810 master-0 kubenswrapper[3985]: E0313 01:10:54.997686 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:55.098665 master-0 kubenswrapper[3985]: E0313 01:10:55.098545 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found"
Mar 13 01:10:55.172602 master-0 kubenswrapper[3985]: E0313 01:10:55.172349 3985 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"master-0\" not found"
Mar 13 01:10:55.176898 master-0 kubenswrapper[3985]: I0313 01:10:55.176842 3985 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach"
Mar 13 01:10:55.178579 master-0 kubenswrapper[3985]: I0313 01:10:55.178495 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory"
Mar 13 01:10:55.178699 master-0 kubenswrapper[3985]: I0313 01:10:55.178583 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure"
Mar 13 01:10:55.178699 master-0 kubenswrapper[3985]: I0313 01:10:55.178603
3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:10:55.179336 master-0 kubenswrapper[3985]: I0313 01:10:55.179290 3985 scope.go:117] "RemoveContainer" containerID="6d8995670c2a83fdd48a121ac1de3a71b9ce55c04e64601cc3a96c583c68bc2c" Mar 13 01:10:55.179680 master-0 kubenswrapper[3985]: E0313 01:10:55.179623 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-rbac-proxy-crio\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-rbac-proxy-crio pod=kube-rbac-proxy-crio-master-0_openshift-machine-config-operator(e9add8df47182fc2eaf8cd78016ebe72)\"" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podUID="e9add8df47182fc2eaf8cd78016ebe72" Mar 13 01:10:55.199100 master-0 kubenswrapper[3985]: E0313 01:10:55.199004 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:55.299669 master-0 kubenswrapper[3985]: E0313 01:10:55.299570 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:55.400351 master-0 kubenswrapper[3985]: E0313 01:10:55.400271 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:55.501522 master-0 kubenswrapper[3985]: E0313 01:10:55.501407 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:55.602424 master-0 kubenswrapper[3985]: E0313 01:10:55.602332 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:55.703407 master-0 kubenswrapper[3985]: E0313 01:10:55.703287 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:55.804550 master-0 kubenswrapper[3985]: E0313 01:10:55.804321 
3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:55.839936 master-0 kubenswrapper[3985]: I0313 01:10:55.839841 3985 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 01:10:55.905141 master-0 kubenswrapper[3985]: E0313 01:10:55.905053 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:55.933838 master-0 kubenswrapper[3985]: I0313 01:10:55.933754 3985 csr.go:261] certificate signing request csr-49s7b is approved, waiting to be issued Mar 13 01:10:55.945539 master-0 kubenswrapper[3985]: I0313 01:10:55.945469 3985 csr.go:257] certificate signing request csr-49s7b is issued Mar 13 01:10:56.005878 master-0 kubenswrapper[3985]: E0313 01:10:56.005773 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:56.106281 master-0 kubenswrapper[3985]: E0313 01:10:56.106069 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:56.207354 master-0 kubenswrapper[3985]: E0313 01:10:56.207235 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:56.307502 master-0 kubenswrapper[3985]: E0313 01:10:56.307380 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:56.408082 master-0 kubenswrapper[3985]: E0313 01:10:56.407837 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:56.508900 master-0 kubenswrapper[3985]: E0313 01:10:56.508774 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:56.609742 master-0 kubenswrapper[3985]: E0313 01:10:56.609628 3985 
kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:56.710611 master-0 kubenswrapper[3985]: E0313 01:10:56.710332 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:56.811468 master-0 kubenswrapper[3985]: E0313 01:10:56.811348 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:56.912291 master-0 kubenswrapper[3985]: E0313 01:10:56.912187 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:56.947674 master-0 kubenswrapper[3985]: I0313 01:10:56.947564 3985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 01:02:11 +0000 UTC, rotation deadline is 2026-03-13 19:36:14.09556976 +0000 UTC Mar 13 01:10:56.947674 master-0 kubenswrapper[3985]: I0313 01:10:56.947621 3985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h25m17.147953856s for next certificate rotation Mar 13 01:10:57.013075 master-0 kubenswrapper[3985]: E0313 01:10:57.012927 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:57.114212 master-0 kubenswrapper[3985]: E0313 01:10:57.114098 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:57.214860 master-0 kubenswrapper[3985]: E0313 01:10:57.214711 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:57.316009 master-0 kubenswrapper[3985]: E0313 01:10:57.315816 3985 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:10:57.416115 master-0 kubenswrapper[3985]: E0313 01:10:57.416006 3985 kubelet_node_status.go:503] "Error getting the 
current node from lister" err="node \"master-0\" not found" Mar 13 01:10:57.500411 master-0 kubenswrapper[3985]: I0313 01:10:57.500315 3985 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 01:10:57.948073 master-0 kubenswrapper[3985]: I0313 01:10:57.947965 3985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 01:02:11 +0000 UTC, rotation deadline is 2026-03-13 21:46:31.354721671 +0000 UTC Mar 13 01:10:57.948073 master-0 kubenswrapper[3985]: I0313 01:10:57.948017 3985 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h35m33.406709837s for next certificate rotation Mar 13 01:10:57.983215 master-0 kubenswrapper[3985]: I0313 01:10:57.983127 3985 apiserver.go:52] "Watching apiserver" Mar 13 01:10:57.988617 master-0 kubenswrapper[3985]: I0313 01:10:57.988566 3985 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 01:10:57.988856 master-0 kubenswrapper[3985]: I0313 01:10:57.988782 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/network-operator-7c649bf6d4-4zrk7","assisted-installer/assisted-installer-controller-qztx6","openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"] Mar 13 01:10:57.989304 master-0 kubenswrapper[3985]: I0313 01:10:57.989254 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:57.989304 master-0 kubenswrapper[3985]: I0313 01:10:57.989291 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:57.989441 master-0 kubenswrapper[3985]: I0313 01:10:57.989413 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:10:57.994193 master-0 kubenswrapper[3985]: I0313 01:10:57.992391 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 13 01:10:57.994193 master-0 kubenswrapper[3985]: I0313 01:10:57.992479 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 13 01:10:57.994193 master-0 kubenswrapper[3985]: I0313 01:10:57.992486 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"assisted-installer-controller-config" Mar 13 01:10:57.994193 master-0 kubenswrapper[3985]: I0313 01:10:57.992491 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 01:10:57.994193 master-0 kubenswrapper[3985]: I0313 01:10:57.992539 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 01:10:57.994193 master-0 kubenswrapper[3985]: I0313 01:10:57.992654 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"openshift-service-ca.crt" Mar 13 01:10:57.994193 master-0 kubenswrapper[3985]: I0313 01:10:57.992731 3985 reflector.go:368] Caches populated for *v1.Secret from object-"assisted-installer"/"assisted-installer-controller-secret" Mar 13 01:10:57.994193 master-0 kubenswrapper[3985]: I0313 01:10:57.992897 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 01:10:57.994193 master-0 kubenswrapper[3985]: I0313 01:10:57.993183 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"assisted-installer"/"kube-root-ca.crt" Mar 13 01:10:57.994946 master-0 kubenswrapper[3985]: I0313 01:10:57.994892 3985 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 01:10:58.066345 master-0 kubenswrapper[3985]: I0313 01:10:58.066262 3985 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 13 01:10:58.122581 master-0 kubenswrapper[3985]: I0313 01:10:58.122442 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-var-run-resolv-conf\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.122581 master-0 kubenswrapper[3985]: I0313 01:10:58.122569 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-sno-bootstrap-files\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.122581 master-0 kubenswrapper[3985]: I0313 01:10:58.122612 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xmqc\" (UniqueName: \"kubernetes.io/projected/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-kube-api-access-5xmqc\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:10:58.123139 master-0 kubenswrapper[3985]: I0313 01:10:58.122651 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d368174-c659-444e-ba28-8fa267c0eda6-service-ca\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " 
pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.123139 master-0 kubenswrapper[3985]: I0313 01:10:58.122688 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.123139 master-0 kubenswrapper[3985]: I0313 01:10:58.122721 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-ca-bundle\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.123139 master-0 kubenswrapper[3985]: I0313 01:10:58.122775 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-resolv-conf\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.123139 master-0 kubenswrapper[3985]: I0313 01:10:58.122818 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.123139 master-0 kubenswrapper[3985]: I0313 01:10:58.122856 3985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.123139 master-0 kubenswrapper[3985]: I0313 01:10:58.122953 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-metrics-tls\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:10:58.123139 master-0 kubenswrapper[3985]: I0313 01:10:58.123017 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhh8f\" (UniqueName: \"kubernetes.io/projected/19460daa-7d22-4d32-899c-274b86c56a13-kube-api-access-fhh8f\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.123139 master-0 kubenswrapper[3985]: I0313 01:10:58.123060 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-host-etc-kube\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:10:58.123139 master-0 kubenswrapper[3985]: I0313 01:10:58.123096 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/2d368174-c659-444e-ba28-8fa267c0eda6-kube-api-access\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.224382 master-0 kubenswrapper[3985]: I0313 01:10:58.224144 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhh8f\" (UniqueName: \"kubernetes.io/projected/19460daa-7d22-4d32-899c-274b86c56a13-kube-api-access-fhh8f\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.224382 master-0 kubenswrapper[3985]: I0313 01:10:58.224219 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-host-etc-kube\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:10:58.224382 master-0 kubenswrapper[3985]: I0313 01:10:58.224257 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d368174-c659-444e-ba28-8fa267c0eda6-kube-api-access\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.224903 master-0 kubenswrapper[3985]: I0313 01:10:58.224453 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-var-run-resolv-conf\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " 
pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.224903 master-0 kubenswrapper[3985]: I0313 01:10:58.224748 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-host-etc-kube\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:10:58.224903 master-0 kubenswrapper[3985]: I0313 01:10:58.224805 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-sno-bootstrap-files\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.224903 master-0 kubenswrapper[3985]: I0313 01:10:58.224846 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xmqc\" (UniqueName: \"kubernetes.io/projected/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-kube-api-access-5xmqc\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:10:58.224903 master-0 kubenswrapper[3985]: I0313 01:10:58.224886 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d368174-c659-444e-ba28-8fa267c0eda6-service-ca\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.225337 master-0 kubenswrapper[3985]: I0313 01:10:58.224922 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-ca-bundle\" (UniqueName: 
\"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-ca-bundle\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.225337 master-0 kubenswrapper[3985]: I0313 01:10:58.224955 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.225337 master-0 kubenswrapper[3985]: I0313 01:10:58.224989 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-resolv-conf\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.225337 master-0 kubenswrapper[3985]: I0313 01:10:58.225024 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.225337 master-0 kubenswrapper[3985]: I0313 01:10:58.225059 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-metrics-tls\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 
01:10:58.225337 master-0 kubenswrapper[3985]: I0313 01:10:58.225097 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-var-run-resolv-conf\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.225337 master-0 kubenswrapper[3985]: I0313 01:10:58.225150 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.225337 master-0 kubenswrapper[3985]: I0313 01:10:58.225253 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-ca-bundle\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.225337 master-0 kubenswrapper[3985]: I0313 01:10:58.225266 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-sno-bootstrap-files\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.225955 master-0 kubenswrapper[3985]: E0313 01:10:58.225378 3985 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 01:10:58.225955 master-0 
kubenswrapper[3985]: E0313 01:10:58.225576 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:10:58.725431007 +0000 UTC m=+44.602111251 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found Mar 13 01:10:58.225955 master-0 kubenswrapper[3985]: I0313 01:10:58.225899 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.225955 master-0 kubenswrapper[3985]: I0313 01:10:58.225948 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.226572 master-0 kubenswrapper[3985]: I0313 01:10:58.226385 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-resolv-conf\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.226696 master-0 
kubenswrapper[3985]: I0313 01:10:58.226602 3985 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 13 01:10:58.227442 master-0 kubenswrapper[3985]: I0313 01:10:58.227302 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d368174-c659-444e-ba28-8fa267c0eda6-service-ca\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.237445 master-0 kubenswrapper[3985]: I0313 01:10:58.237239 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-metrics-tls\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:10:58.259694 master-0 kubenswrapper[3985]: I0313 01:10:58.259617 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhh8f\" (UniqueName: \"kubernetes.io/projected/19460daa-7d22-4d32-899c-274b86c56a13-kube-api-access-fhh8f\") pod \"assisted-installer-controller-qztx6\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") " pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:10:58.259901 master-0 kubenswrapper[3985]: I0313 01:10:58.259836 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d368174-c659-444e-ba28-8fa267c0eda6-kube-api-access\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:10:58.260426 master-0 
kubenswrapper[3985]: I0313 01:10:58.260366 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xmqc\" (UniqueName: \"kubernetes.io/projected/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-kube-api-access-5xmqc\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7"
Mar 13 01:10:58.331699 master-0 kubenswrapper[3985]: I0313 01:10:58.331597 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-qztx6"
Mar 13 01:10:58.345726 master-0 kubenswrapper[3985]: I0313 01:10:58.345671 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7"
Mar 13 01:10:58.360395 master-0 kubenswrapper[3985]: W0313 01:10:58.360337 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddc85ce91_b9de_4e9f_a1f7_12ce9887b1dc.slice/crio-bc8f7d43b71dfb70df609090acace3d9c40c52d842b2f9e449644f3b06944eff WatchSource:0}: Error finding container bc8f7d43b71dfb70df609090acace3d9c40c52d842b2f9e449644f3b06944eff: Status 404 returned error can't find the container with id bc8f7d43b71dfb70df609090acace3d9c40c52d842b2f9e449644f3b06944eff
Mar 13 01:10:58.373402 master-0 kubenswrapper[3985]: I0313 01:10:58.373331 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" event={"ID":"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc","Type":"ContainerStarted","Data":"bc8f7d43b71dfb70df609090acace3d9c40c52d842b2f9e449644f3b06944eff"}
Mar 13 01:10:58.374870 master-0 kubenswrapper[3985]: I0313 01:10:58.374777 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-qztx6" event={"ID":"19460daa-7d22-4d32-899c-274b86c56a13","Type":"ContainerStarted","Data":"d309d321e2b3c142df3b5753d507bff20af97e5f4ec76c20a22f4d71bfceba91"}
Mar 13 01:10:58.730855 master-0 kubenswrapper[3985]: I0313 01:10:58.730779 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"
Mar 13 01:10:58.731106 master-0 kubenswrapper[3985]: E0313 01:10:58.730986 3985 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 01:10:58.731106 master-0 kubenswrapper[3985]: E0313 01:10:58.731070 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:10:59.731045508 +0000 UTC m=+45.607725762 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found
Mar 13 01:10:59.742981 master-0 kubenswrapper[3985]: I0313 01:10:59.742900 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"
Mar 13 01:10:59.744123 master-0 kubenswrapper[3985]: E0313 01:10:59.743188 3985 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 01:10:59.744123 master-0 kubenswrapper[3985]: E0313 01:10:59.743641 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:11:01.743597009 +0000 UTC m=+47.620277263 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found
Mar 13 01:11:01.757755 master-0 kubenswrapper[3985]: I0313 01:11:01.757672 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"
Mar 13 01:11:01.759165 master-0 kubenswrapper[3985]: E0313 01:11:01.757852 3985 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 01:11:01.759165 master-0 kubenswrapper[3985]: E0313 01:11:01.757924 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:11:05.75790075 +0000 UTC m=+51.634580964 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found
Mar 13 01:11:04.394894 master-0 kubenswrapper[3985]: I0313 01:11:04.394243 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" event={"ID":"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc","Type":"ContainerStarted","Data":"7f4c53a355951175886abfb80eb4256c32b51f0ad7d9c970345c8e4c70d93ccb"}
Mar 13 01:11:04.398782 master-0 kubenswrapper[3985]: I0313 01:11:04.398665 3985 generic.go:334] "Generic (PLEG): container finished" podID="19460daa-7d22-4d32-899c-274b86c56a13" containerID="ffc5eb0505bcd1aede3306af3760c2bce7320e07eb88bcd177785bc53255cfa2" exitCode=0
Mar 13 01:11:04.398782 master-0 kubenswrapper[3985]: I0313 01:11:04.398780 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-qztx6" event={"ID":"19460daa-7d22-4d32-899c-274b86c56a13","Type":"ContainerDied","Data":"ffc5eb0505bcd1aede3306af3760c2bce7320e07eb88bcd177785bc53255cfa2"}
Mar 13 01:11:04.419182 master-0 kubenswrapper[3985]: I0313 01:11:04.419061 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" podStartSLOduration=10.36393271 podStartE2EDuration="15.419027788s" podCreationTimestamp="2026-03-13 01:10:49 +0000 UTC" firstStartedPulling="2026-03-13 01:10:58.36430705 +0000 UTC m=+44.240987294" lastFinishedPulling="2026-03-13 01:11:03.419402158 +0000 UTC m=+49.296082372" observedRunningTime="2026-03-13 01:11:04.418719714 +0000 UTC m=+50.295399998" watchObservedRunningTime="2026-03-13 01:11:04.419027788 +0000 UTC m=+50.295708012"
Mar 13 01:11:05.431373 master-0 kubenswrapper[3985]: I0313 01:11:05.431302 3985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-qztx6"
Mar 13 01:11:05.529547 master-0 kubenswrapper[3985]: I0313 01:11:05.529421 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-sno-bootstrap-files\") pod \"19460daa-7d22-4d32-899c-274b86c56a13\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") "
Mar 13 01:11:05.529833 master-0 kubenswrapper[3985]: I0313 01:11:05.529567 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-resolv-conf\") pod \"19460daa-7d22-4d32-899c-274b86c56a13\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") "
Mar 13 01:11:05.529833 master-0 kubenswrapper[3985]: I0313 01:11:05.529626 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-var-run-resolv-conf\") pod \"19460daa-7d22-4d32-899c-274b86c56a13\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") "
Mar 13 01:11:05.529833 master-0 kubenswrapper[3985]: I0313 01:11:05.529647 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-sno-bootstrap-files" (OuterVolumeSpecName: "sno-bootstrap-files") pod "19460daa-7d22-4d32-899c-274b86c56a13" (UID: "19460daa-7d22-4d32-899c-274b86c56a13"). InnerVolumeSpecName "sno-bootstrap-files". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:11:05.529833 master-0 kubenswrapper[3985]: I0313 01:11:05.529693 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhh8f\" (UniqueName: \"kubernetes.io/projected/19460daa-7d22-4d32-899c-274b86c56a13-kube-api-access-fhh8f\") pod \"19460daa-7d22-4d32-899c-274b86c56a13\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") "
Mar 13 01:11:05.530006 master-0 kubenswrapper[3985]: I0313 01:11:05.529765 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-resolv-conf" (OuterVolumeSpecName: "host-resolv-conf") pod "19460daa-7d22-4d32-899c-274b86c56a13" (UID: "19460daa-7d22-4d32-899c-274b86c56a13"). InnerVolumeSpecName "host-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:11:05.530006 master-0 kubenswrapper[3985]: I0313 01:11:05.529831 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-var-run-resolv-conf" (OuterVolumeSpecName: "host-var-run-resolv-conf") pod "19460daa-7d22-4d32-899c-274b86c56a13" (UID: "19460daa-7d22-4d32-899c-274b86c56a13"). InnerVolumeSpecName "host-var-run-resolv-conf". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:11:05.530006 master-0 kubenswrapper[3985]: I0313 01:11:05.529916 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-ca-bundle" (OuterVolumeSpecName: "host-ca-bundle") pod "19460daa-7d22-4d32-899c-274b86c56a13" (UID: "19460daa-7d22-4d32-899c-274b86c56a13"). InnerVolumeSpecName "host-ca-bundle". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:11:05.530006 master-0 kubenswrapper[3985]: I0313 01:11:05.529868 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-ca-bundle\") pod \"19460daa-7d22-4d32-899c-274b86c56a13\" (UID: \"19460daa-7d22-4d32-899c-274b86c56a13\") "
Mar 13 01:11:05.530442 master-0 kubenswrapper[3985]: I0313 01:11:05.530382 3985 reconciler_common.go:293] "Volume detached for volume \"sno-bootstrap-files\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-sno-bootstrap-files\") on node \"master-0\" DevicePath \"\""
Mar 13 01:11:05.530442 master-0 kubenswrapper[3985]: I0313 01:11:05.530438 3985 reconciler_common.go:293] "Volume detached for volume \"host-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 13 01:11:05.530565 master-0 kubenswrapper[3985]: I0313 01:11:05.530454 3985 reconciler_common.go:293] "Volume detached for volume \"host-var-run-resolv-conf\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-var-run-resolv-conf\") on node \"master-0\" DevicePath \"\""
Mar 13 01:11:05.530565 master-0 kubenswrapper[3985]: I0313 01:11:05.530486 3985 reconciler_common.go:293] "Volume detached for volume \"host-ca-bundle\" (UniqueName: \"kubernetes.io/host-path/19460daa-7d22-4d32-899c-274b86c56a13-host-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 01:11:05.535178 master-0 kubenswrapper[3985]: I0313 01:11:05.535127 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19460daa-7d22-4d32-899c-274b86c56a13-kube-api-access-fhh8f" (OuterVolumeSpecName: "kube-api-access-fhh8f") pod "19460daa-7d22-4d32-899c-274b86c56a13" (UID: "19460daa-7d22-4d32-899c-274b86c56a13"). InnerVolumeSpecName "kube-api-access-fhh8f". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:11:05.631230 master-0 kubenswrapper[3985]: I0313 01:11:05.631139 3985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhh8f\" (UniqueName: \"kubernetes.io/projected/19460daa-7d22-4d32-899c-274b86c56a13-kube-api-access-fhh8f\") on node \"master-0\" DevicePath \"\""
Mar 13 01:11:05.833134 master-0 kubenswrapper[3985]: I0313 01:11:05.832995 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"
Mar 13 01:11:05.833442 master-0 kubenswrapper[3985]: E0313 01:11:05.833253 3985 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 01:11:05.833442 master-0 kubenswrapper[3985]: E0313 01:11:05.833428 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:11:13.83338687 +0000 UTC m=+59.710067114 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found
Mar 13 01:11:06.407215 master-0 kubenswrapper[3985]: I0313 01:11:06.406490 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="assisted-installer/assisted-installer-controller-qztx6" event={"ID":"19460daa-7d22-4d32-899c-274b86c56a13","Type":"ContainerDied","Data":"d309d321e2b3c142df3b5753d507bff20af97e5f4ec76c20a22f4d71bfceba91"}
Mar 13 01:11:06.407215 master-0 kubenswrapper[3985]: I0313 01:11:06.406741 3985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d309d321e2b3c142df3b5753d507bff20af97e5f4ec76c20a22f4d71bfceba91"
Mar 13 01:11:06.407215 master-0 kubenswrapper[3985]: I0313 01:11:06.406757 3985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-qztx6"
Mar 13 01:11:06.458749 master-0 kubenswrapper[3985]: I0313 01:11:06.458668 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/mtu-prober-fzkjs"]
Mar 13 01:11:06.459457 master-0 kubenswrapper[3985]: E0313 01:11:06.458793 3985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19460daa-7d22-4d32-899c-274b86c56a13" containerName="assisted-installer-controller"
Mar 13 01:11:06.459457 master-0 kubenswrapper[3985]: I0313 01:11:06.458812 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="19460daa-7d22-4d32-899c-274b86c56a13" containerName="assisted-installer-controller"
Mar 13 01:11:06.459457 master-0 kubenswrapper[3985]: I0313 01:11:06.458871 3985 memory_manager.go:354] "RemoveStaleState removing state" podUID="19460daa-7d22-4d32-899c-274b86c56a13" containerName="assisted-installer-controller"
Mar 13 01:11:06.459457 master-0 kubenswrapper[3985]: I0313 01:11:06.459181 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-fzkjs"
Mar 13 01:11:06.638413 master-0 kubenswrapper[3985]: I0313 01:11:06.638308 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cm65\" (UniqueName: \"kubernetes.io/projected/348e0611-5b3c-4238-a571-813fc16057df-kube-api-access-4cm65\") pod \"mtu-prober-fzkjs\" (UID: \"348e0611-5b3c-4238-a571-813fc16057df\") " pod="openshift-network-operator/mtu-prober-fzkjs"
Mar 13 01:11:06.739176 master-0 kubenswrapper[3985]: I0313 01:11:06.739067 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4cm65\" (UniqueName: \"kubernetes.io/projected/348e0611-5b3c-4238-a571-813fc16057df-kube-api-access-4cm65\") pod \"mtu-prober-fzkjs\" (UID: \"348e0611-5b3c-4238-a571-813fc16057df\") " pod="openshift-network-operator/mtu-prober-fzkjs"
Mar 13 01:11:06.769610 master-0 kubenswrapper[3985]: I0313 01:11:06.769544 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4cm65\" (UniqueName: \"kubernetes.io/projected/348e0611-5b3c-4238-a571-813fc16057df-kube-api-access-4cm65\") pod \"mtu-prober-fzkjs\" (UID: \"348e0611-5b3c-4238-a571-813fc16057df\") " pod="openshift-network-operator/mtu-prober-fzkjs"
Mar 13 01:11:06.774091 master-0 kubenswrapper[3985]: I0313 01:11:06.774039 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-fzkjs"
Mar 13 01:11:06.793975 master-0 kubenswrapper[3985]: W0313 01:11:06.793892 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod348e0611_5b3c_4238_a571_813fc16057df.slice/crio-bdb40c51b631cfd8ed2d352b14bf92f1b865b72b8d5f97d0a609a8d216e8763a WatchSource:0}: Error finding container bdb40c51b631cfd8ed2d352b14bf92f1b865b72b8d5f97d0a609a8d216e8763a: Status 404 returned error can't find the container with id bdb40c51b631cfd8ed2d352b14bf92f1b865b72b8d5f97d0a609a8d216e8763a
Mar 13 01:11:07.413600 master-0 kubenswrapper[3985]: I0313 01:11:07.413316 3985 generic.go:334] "Generic (PLEG): container finished" podID="348e0611-5b3c-4238-a571-813fc16057df" containerID="53dcbd61cdb4ba2de960bb2099fda9de5cc31628732654b744e0b56ff9b97460" exitCode=0
Mar 13 01:11:07.413600 master-0 kubenswrapper[3985]: I0313 01:11:07.413390 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-fzkjs" event={"ID":"348e0611-5b3c-4238-a571-813fc16057df","Type":"ContainerDied","Data":"53dcbd61cdb4ba2de960bb2099fda9de5cc31628732654b744e0b56ff9b97460"}
Mar 13 01:11:07.413600 master-0 kubenswrapper[3985]: I0313 01:11:07.413440 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-fzkjs" event={"ID":"348e0611-5b3c-4238-a571-813fc16057df","Type":"ContainerStarted","Data":"bdb40c51b631cfd8ed2d352b14bf92f1b865b72b8d5f97d0a609a8d216e8763a"}
Mar 13 01:11:08.444353 master-0 kubenswrapper[3985]: I0313 01:11:08.444216 3985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-fzkjs"
Mar 13 01:11:08.553853 master-0 kubenswrapper[3985]: I0313 01:11:08.553701 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4cm65\" (UniqueName: \"kubernetes.io/projected/348e0611-5b3c-4238-a571-813fc16057df-kube-api-access-4cm65\") pod \"348e0611-5b3c-4238-a571-813fc16057df\" (UID: \"348e0611-5b3c-4238-a571-813fc16057df\") "
Mar 13 01:11:08.560042 master-0 kubenswrapper[3985]: I0313 01:11:08.559941 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348e0611-5b3c-4238-a571-813fc16057df-kube-api-access-4cm65" (OuterVolumeSpecName: "kube-api-access-4cm65") pod "348e0611-5b3c-4238-a571-813fc16057df" (UID: "348e0611-5b3c-4238-a571-813fc16057df"). InnerVolumeSpecName "kube-api-access-4cm65". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:11:08.655114 master-0 kubenswrapper[3985]: I0313 01:11:08.654965 3985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4cm65\" (UniqueName: \"kubernetes.io/projected/348e0611-5b3c-4238-a571-813fc16057df-kube-api-access-4cm65\") on node \"master-0\" DevicePath \"\""
Mar 13 01:11:09.200619 master-0 kubenswrapper[3985]: I0313 01:11:09.200280 3985 scope.go:117] "RemoveContainer" containerID="6d8995670c2a83fdd48a121ac1de3a71b9ce55c04e64601cc3a96c583c68bc2c"
Mar 13 01:11:09.200619 master-0 kubenswrapper[3985]: I0313 01:11:09.201241 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"]
Mar 13 01:11:09.420258 master-0 kubenswrapper[3985]: I0313 01:11:09.420168 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/mtu-prober-fzkjs" event={"ID":"348e0611-5b3c-4238-a571-813fc16057df","Type":"ContainerDied","Data":"bdb40c51b631cfd8ed2d352b14bf92f1b865b72b8d5f97d0a609a8d216e8763a"}
Mar 13 01:11:09.420258 master-0 kubenswrapper[3985]: I0313 01:11:09.420229 3985 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb40c51b631cfd8ed2d352b14bf92f1b865b72b8d5f97d0a609a8d216e8763a"
Mar 13 01:11:09.420258 master-0 kubenswrapper[3985]: I0313 01:11:09.420257 3985 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/mtu-prober-fzkjs"
Mar 13 01:11:10.431085 master-0 kubenswrapper[3985]: I0313 01:11:10.430970 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log"
Mar 13 01:11:10.433433 master-0 kubenswrapper[3985]: I0313 01:11:10.433368 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"42bca1f920cccc1592fa3eb549dd4fbc400b4f25b9bcf7ef0e6efb375c7c1e44"}
Mar 13 01:11:10.456590 master-0 kubenswrapper[3985]: I0313 01:11:10.456326 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" podStartSLOduration=1.4562950909999999 podStartE2EDuration="1.456295091s" podCreationTimestamp="2026-03-13 01:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:11:10.45626727 +0000 UTC m=+56.332947574" watchObservedRunningTime="2026-03-13 01:11:10.456295091 +0000 UTC m=+56.332975335"
Mar 13 01:11:11.455240 master-0 kubenswrapper[3985]: I0313 01:11:11.455164 3985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-network-operator/mtu-prober-fzkjs"]
Mar 13 01:11:11.461827 master-0 kubenswrapper[3985]: I0313 01:11:11.461764 3985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-network-operator/mtu-prober-fzkjs"]
Mar 13 01:11:13.183299 master-0 kubenswrapper[3985]: I0313 01:11:13.183037 3985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="348e0611-5b3c-4238-a571-813fc16057df" path="/var/lib/kubelet/pods/348e0611-5b3c-4238-a571-813fc16057df/volumes"
Mar 13 01:11:13.903906 master-0 kubenswrapper[3985]: I0313 01:11:13.903773 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"
Mar 13 01:11:13.904419 master-0 kubenswrapper[3985]: E0313 01:11:13.904064 3985 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found
Mar 13 01:11:13.904419 master-0 kubenswrapper[3985]: E0313 01:11:13.904213 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:11:29.904177785 +0000 UTC m=+75.780858029 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found
Mar 13 01:11:16.462532 master-0 kubenswrapper[3985]: I0313 01:11:16.462431 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-xk75p"]
Mar 13 01:11:16.463493 master-0 kubenswrapper[3985]: E0313 01:11:16.462636 3985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348e0611-5b3c-4238-a571-813fc16057df" containerName="prober"
Mar 13 01:11:16.463493 master-0 kubenswrapper[3985]: I0313 01:11:16.462668 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="348e0611-5b3c-4238-a571-813fc16057df" containerName="prober"
Mar 13 01:11:16.463493 master-0 kubenswrapper[3985]: I0313 01:11:16.462723 3985 memory_manager.go:354] "RemoveStaleState removing state" podUID="348e0611-5b3c-4238-a571-813fc16057df" containerName="prober"
Mar 13 01:11:16.463493 master-0 kubenswrapper[3985]: I0313 01:11:16.463253 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.469052 master-0 kubenswrapper[3985]: I0313 01:11:16.468878 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config"
Mar 13 01:11:16.469381 master-0 kubenswrapper[3985]: I0313 01:11:16.469270 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 13 01:11:16.470693 master-0 kubenswrapper[3985]: I0313 01:11:16.469938 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 13 01:11:16.470693 master-0 kubenswrapper[3985]: I0313 01:11:16.470213 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 13 01:11:16.624383 master-0 kubenswrapper[3985]: I0313 01:11:16.624278 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-etc-kubernetes\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.624712 master-0 kubenswrapper[3985]: I0313 01:11:16.624438 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-cni-binary-copy\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.624712 master-0 kubenswrapper[3985]: I0313 01:11:16.624557 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.624712 master-0 kubenswrapper[3985]: I0313 01:11:16.624595 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-multus\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.624829 master-0 kubenswrapper[3985]: I0313 01:11:16.624747 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-cnibin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.624829 master-0 kubenswrapper[3985]: I0313 01:11:16.624781 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-os-release\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.624910 master-0 kubenswrapper[3985]: I0313 01:11:16.624857 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-kubelet\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.624951 master-0 kubenswrapper[3985]: I0313 01:11:16.624933 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-system-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.625002 master-0 kubenswrapper[3985]: I0313 01:11:16.624968 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-netns\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.625054 master-0 kubenswrapper[3985]: I0313 01:11:16.625000 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-bin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.625100 master-0 kubenswrapper[3985]: I0313 01:11:16.625073 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-hostroot\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.625140 master-0 kubenswrapper[3985]: I0313 01:11:16.625109 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-daemon-config\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.625224 master-0 kubenswrapper[3985]: I0313 01:11:16.625159 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-k8s-cni-cncf-io\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.625282 master-0 kubenswrapper[3985]: I0313 01:11:16.625242 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-conf-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.625325 master-0 kubenswrapper[3985]: I0313 01:11:16.625280 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjkgv\" (UniqueName: \"kubernetes.io/projected/de46c12a-aa3e-442e-bcc4-365d05f50103-kube-api-access-sjkgv\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.625325 master-0 kubenswrapper[3985]: I0313 01:11:16.625312 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-socket-dir-parent\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.625402 master-0 kubenswrapper[3985]: I0313 01:11:16.625347 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-multus-certs\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.692584 master-0 kubenswrapper[3985]: I0313 01:11:16.688809 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-mjh5s"]
Mar 13 01:11:16.692919 master-0 kubenswrapper[3985]: I0313 01:11:16.692764 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:11:16.697160 master-0 kubenswrapper[3985]: I0313 01:11:16.696798 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 13 01:11:16.697160 master-0 kubenswrapper[3985]: I0313 01:11:16.696809 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726013 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-multus\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726089 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726154 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-os-release\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726194 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-kubelet\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726243 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-cnibin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726283 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-netns\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726316 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-bin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726347 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-hostroot\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726378 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-daemon-config\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726412 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-system-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726445 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-k8s-cni-cncf-io\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726478 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-conf-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726523 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjkgv\" (UniqueName: \"kubernetes.io/projected/de46c12a-aa3e-442e-bcc4-365d05f50103-kube-api-access-sjkgv\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726592 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-socket-dir-parent\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726625 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume
\"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-multus-certs\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726661 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-etc-kubernetes\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.726997 master-0 kubenswrapper[3985]: I0313 01:11:16.726697 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-cni-binary-copy\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.728845 master-0 kubenswrapper[3985]: I0313 01:11:16.727949 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-cni-binary-copy\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.728845 master-0 kubenswrapper[3985]: I0313 01:11:16.728039 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-multus\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.728845 master-0 kubenswrapper[3985]: I0313 01:11:16.728458 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-os-release\") 
pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.728845 master-0 kubenswrapper[3985]: I0313 01:11:16.728524 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.728845 master-0 kubenswrapper[3985]: I0313 01:11:16.728594 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-kubelet\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.728845 master-0 kubenswrapper[3985]: I0313 01:11:16.728670 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-system-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.728845 master-0 kubenswrapper[3985]: I0313 01:11:16.728738 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-hostroot\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.728845 master-0 kubenswrapper[3985]: I0313 01:11:16.728749 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-bin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.728845 master-0 
kubenswrapper[3985]: I0313 01:11:16.728742 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-socket-dir-parent\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.728845 master-0 kubenswrapper[3985]: I0313 01:11:16.728847 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-netns\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.729796 master-0 kubenswrapper[3985]: I0313 01:11:16.728915 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-cnibin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.729796 master-0 kubenswrapper[3985]: I0313 01:11:16.728959 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-k8s-cni-cncf-io\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.729796 master-0 kubenswrapper[3985]: I0313 01:11:16.729020 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-etc-kubernetes\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.729796 master-0 kubenswrapper[3985]: I0313 01:11:16.729080 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" 
(UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-conf-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.729796 master-0 kubenswrapper[3985]: I0313 01:11:16.729160 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-multus-certs\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.730190 master-0 kubenswrapper[3985]: I0313 01:11:16.729899 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-daemon-config\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.753324 master-0 kubenswrapper[3985]: I0313 01:11:16.753196 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjkgv\" (UniqueName: \"kubernetes.io/projected/de46c12a-aa3e-442e-bcc4-365d05f50103-kube-api-access-sjkgv\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.789338 master-0 kubenswrapper[3985]: I0313 01:11:16.789230 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-xk75p" Mar 13 01:11:16.804352 master-0 kubenswrapper[3985]: W0313 01:11:16.804289 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podde46c12a_aa3e_442e_bcc4_365d05f50103.slice/crio-24dc2549a8ac6f39dd6f57c57f717e50a501dd15d60d7e2a80b78b592b931b48 WatchSource:0}: Error finding container 24dc2549a8ac6f39dd6f57c57f717e50a501dd15d60d7e2a80b78b592b931b48: Status 404 returned error can't find the container with id 24dc2549a8ac6f39dd6f57c57f717e50a501dd15d60d7e2a80b78b592b931b48 Mar 13 01:11:16.828236 master-0 kubenswrapper[3985]: I0313 01:11:16.828155 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.828643 master-0 kubenswrapper[3985]: I0313 01:11:16.828332 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.828643 master-0 kubenswrapper[3985]: I0313 01:11:16.828401 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-system-cni-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.828643 master-0 
kubenswrapper[3985]: I0313 01:11:16.828421 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cnibin\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.828643 master-0 kubenswrapper[3985]: I0313 01:11:16.828439 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.828643 master-0 kubenswrapper[3985]: I0313 01:11:16.828587 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.828972 master-0 kubenswrapper[3985]: I0313 01:11:16.828876 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-os-release\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.829072 master-0 kubenswrapper[3985]: I0313 01:11:16.829028 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dlx5\" (UniqueName: 
\"kubernetes.io/projected/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-kube-api-access-2dlx5\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.930397 master-0 kubenswrapper[3985]: I0313 01:11:16.930236 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dlx5\" (UniqueName: \"kubernetes.io/projected/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-kube-api-access-2dlx5\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.930397 master-0 kubenswrapper[3985]: I0313 01:11:16.930340 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-os-release\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.930397 master-0 kubenswrapper[3985]: I0313 01:11:16.930383 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.930928 master-0 kubenswrapper[3985]: I0313 01:11:16.930450 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.930928 master-0 
kubenswrapper[3985]: I0313 01:11:16.930491 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-system-cni-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.930928 master-0 kubenswrapper[3985]: I0313 01:11:16.930562 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cnibin\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.930928 master-0 kubenswrapper[3985]: I0313 01:11:16.930595 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.930928 master-0 kubenswrapper[3985]: I0313 01:11:16.930628 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.930928 master-0 kubenswrapper[3985]: I0313 01:11:16.930844 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-os-release\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") 
" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.930928 master-0 kubenswrapper[3985]: I0313 01:11:16.930922 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.931332 master-0 kubenswrapper[3985]: I0313 01:11:16.931075 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cnibin\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.931332 master-0 kubenswrapper[3985]: I0313 01:11:16.931125 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-system-cni-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.932022 master-0 kubenswrapper[3985]: I0313 01:11:16.931976 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.932111 master-0 kubenswrapper[3985]: I0313 01:11:16.932091 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-binary-copy\") pod 
\"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.932933 master-0 kubenswrapper[3985]: I0313 01:11:16.932855 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:16.959580 master-0 kubenswrapper[3985]: I0313 01:11:16.959460 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dlx5\" (UniqueName: \"kubernetes.io/projected/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-kube-api-access-2dlx5\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:17.014633 master-0 kubenswrapper[3985]: I0313 01:11:17.014496 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:11:17.033797 master-0 kubenswrapper[3985]: W0313 01:11:17.033630 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf91b91e8_6d3d_42b9_a158_b22a5a0cc7fd.slice/crio-35dc923311215b12bc6926327888353ee4dac03edf2bd01fd1709920b747d038 WatchSource:0}: Error finding container 35dc923311215b12bc6926327888353ee4dac03edf2bd01fd1709920b747d038: Status 404 returned error can't find the container with id 35dc923311215b12bc6926327888353ee4dac03edf2bd01fd1709920b747d038 Mar 13 01:11:17.453568 master-0 kubenswrapper[3985]: I0313 01:11:17.453363 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" event={"ID":"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd","Type":"ContainerStarted","Data":"35dc923311215b12bc6926327888353ee4dac03edf2bd01fd1709920b747d038"} Mar 13 01:11:17.455032 master-0 kubenswrapper[3985]: I0313 01:11:17.454958 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xk75p" event={"ID":"de46c12a-aa3e-442e-bcc4-365d05f50103","Type":"ContainerStarted","Data":"24dc2549a8ac6f39dd6f57c57f717e50a501dd15d60d7e2a80b78b592b931b48"} Mar 13 01:11:17.474055 master-0 kubenswrapper[3985]: I0313 01:11:17.473094 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-9hwz9"] Mar 13 01:11:17.474055 master-0 kubenswrapper[3985]: I0313 01:11:17.473634 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:17.474055 master-0 kubenswrapper[3985]: E0313 01:11:17.473772 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:17.638250 master-0 kubenswrapper[3985]: I0313 01:11:17.638162 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:17.638250 master-0 kubenswrapper[3985]: I0313 01:11:17.638236 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj7cp\" (UniqueName: \"kubernetes.io/projected/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-kube-api-access-pj7cp\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:17.739734 master-0 kubenswrapper[3985]: I0313 01:11:17.739593 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:17.739734 master-0 kubenswrapper[3985]: I0313 01:11:17.739667 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj7cp\" 
(UniqueName: \"kubernetes.io/projected/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-kube-api-access-pj7cp\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:17.740311 master-0 kubenswrapper[3985]: E0313 01:11:17.740264 3985 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:17.740609 master-0 kubenswrapper[3985]: E0313 01:11:17.740502 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:11:18.240449831 +0000 UTC m=+64.117130055 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:17.794187 master-0 kubenswrapper[3985]: I0313 01:11:17.794098 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj7cp\" (UniqueName: \"kubernetes.io/projected/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-kube-api-access-pj7cp\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:18.243814 master-0 kubenswrapper[3985]: I0313 01:11:18.243621 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:18.243814 master-0 
kubenswrapper[3985]: E0313 01:11:18.243771 3985 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:18.244398 master-0 kubenswrapper[3985]: E0313 01:11:18.244073 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:11:19.244056183 +0000 UTC m=+65.120736387 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:19.178968 master-0 kubenswrapper[3985]: I0313 01:11:19.178910 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:19.179498 master-0 kubenswrapper[3985]: E0313 01:11:19.179453 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:19.251321 master-0 kubenswrapper[3985]: I0313 01:11:19.250818 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:19.251321 master-0 kubenswrapper[3985]: E0313 01:11:19.250956 3985 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:19.251321 master-0 kubenswrapper[3985]: E0313 01:11:19.251007 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:11:21.250993466 +0000 UTC m=+67.127673680 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:20.467977 master-0 kubenswrapper[3985]: I0313 01:11:20.467827 3985 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="10183ca532088fab9b3fb6cb86be21e2b5c24c18173f81ce8ac9d9efb43524c5" exitCode=0 Mar 13 01:11:20.467977 master-0 kubenswrapper[3985]: I0313 01:11:20.467961 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" event={"ID":"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd","Type":"ContainerDied","Data":"10183ca532088fab9b3fb6cb86be21e2b5c24c18173f81ce8ac9d9efb43524c5"} Mar 13 01:11:21.177390 master-0 kubenswrapper[3985]: I0313 01:11:21.177320 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:21.177721 master-0 kubenswrapper[3985]: E0313 01:11:21.177482 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:21.268710 master-0 kubenswrapper[3985]: I0313 01:11:21.268637 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:21.268985 master-0 kubenswrapper[3985]: E0313 01:11:21.268883 3985 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:21.269082 master-0 kubenswrapper[3985]: E0313 01:11:21.269046 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:11:25.269009179 +0000 UTC m=+71.145689403 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:23.176714 master-0 kubenswrapper[3985]: I0313 01:11:23.176659 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:23.177466 master-0 kubenswrapper[3985]: E0313 01:11:23.176824 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:25.177115 master-0 kubenswrapper[3985]: I0313 01:11:25.177027 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:25.177946 master-0 kubenswrapper[3985]: E0313 01:11:25.177349 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:25.304557 master-0 kubenswrapper[3985]: I0313 01:11:25.304471 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:25.304804 master-0 kubenswrapper[3985]: E0313 01:11:25.304641 3985 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:25.304804 master-0 kubenswrapper[3985]: E0313 01:11:25.304708 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:11:33.304689651 +0000 UTC m=+79.181369865 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:27.176986 master-0 kubenswrapper[3985]: I0313 01:11:27.176892 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:27.177630 master-0 kubenswrapper[3985]: E0313 01:11:27.177037 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:28.960946 master-0 kubenswrapper[3985]: I0313 01:11:28.960869 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp"] Mar 13 01:11:28.961674 master-0 kubenswrapper[3985]: I0313 01:11:28.961343 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:28.966631 master-0 kubenswrapper[3985]: I0313 01:11:28.964235 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 01:11:28.966631 master-0 kubenswrapper[3985]: I0313 01:11:28.964472 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 01:11:28.966631 master-0 kubenswrapper[3985]: I0313 01:11:28.964819 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 01:11:28.966631 master-0 kubenswrapper[3985]: I0313 01:11:28.964902 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 01:11:28.966631 master-0 kubenswrapper[3985]: I0313 01:11:28.965148 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 01:11:29.048327 master-0 kubenswrapper[3985]: I0313 01:11:29.047701 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hwpff"] Mar 13 01:11:29.048660 master-0 kubenswrapper[3985]: I0313 01:11:29.048472 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.050044 master-0 kubenswrapper[3985]: I0313 01:11:29.049813 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 01:11:29.051366 master-0 kubenswrapper[3985]: I0313 01:11:29.050373 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 01:11:29.138125 master-0 kubenswrapper[3985]: I0313 01:11:29.137870 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c377a67-e763-4925-afae-a7f8546a369b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.138125 master-0 kubenswrapper[3985]: I0313 01:11:29.137941 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.138125 master-0 kubenswrapper[3985]: I0313 01:11:29.138001 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.138125 master-0 kubenswrapper[3985]: I0313 01:11:29.138058 3985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6wzz\" (UniqueName: \"kubernetes.io/projected/8c377a67-e763-4925-afae-a7f8546a369b-kube-api-access-t6wzz\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.177430 master-0 kubenswrapper[3985]: I0313 01:11:29.177379 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:29.177791 master-0 kubenswrapper[3985]: E0313 01:11:29.177520 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:29.238831 master-0 kubenswrapper[3985]: I0313 01:11:29.238764 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-netns\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.238831 master-0 kubenswrapper[3985]: I0313 01:11:29.238829 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-node-log\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.239123 master-0 kubenswrapper[3985]: I0313 01:11:29.238883 3985 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-t6wzz\" (UniqueName: \"kubernetes.io/projected/8c377a67-e763-4925-afae-a7f8546a369b-kube-api-access-t6wzz\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.239123 master-0 kubenswrapper[3985]: I0313 01:11:29.239067 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-openvswitch\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.239123 master-0 kubenswrapper[3985]: I0313 01:11:29.239094 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-log-socket\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.239123 master-0 kubenswrapper[3985]: I0313 01:11:29.239119 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn9sr\" (UniqueName: \"kubernetes.io/projected/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-kube-api-access-tn9sr\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.239247 master-0 kubenswrapper[3985]: I0313 01:11:29.239150 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c377a67-e763-4925-afae-a7f8546a369b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.239247 master-0 kubenswrapper[3985]: I0313 01:11:29.239176 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-netd\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.239247 master-0 kubenswrapper[3985]: I0313 01:11:29.239203 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovn-node-metrics-cert\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.239247 master-0 kubenswrapper[3985]: I0313 01:11:29.239235 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.239362 master-0 kubenswrapper[3985]: I0313 01:11:29.239284 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-bin\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.239362 master-0 kubenswrapper[3985]: I0313 01:11:29.239314 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-script-lib\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.239362 master-0 kubenswrapper[3985]: I0313 01:11:29.239335 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-systemd\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.239362 master-0 kubenswrapper[3985]: I0313 01:11:29.239353 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-slash\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.239471 master-0 kubenswrapper[3985]: I0313 01:11:29.239369 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.239418 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-config\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.240539 3985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.240782 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-env-overrides\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.240837 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-var-lib-openvswitch\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.240990 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.241028 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hwpff\" (UID: 
\"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.241062 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-systemd-units\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.241206 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-etc-openvswitch\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.241233 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-kubelet\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.241254 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-ovn\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.241920 master-0 kubenswrapper[3985]: I0313 01:11:29.241667 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-ovnkube-config\") 
pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.253016 master-0 kubenswrapper[3985]: I0313 01:11:29.252975 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c377a67-e763-4925-afae-a7f8546a369b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.256202 master-0 kubenswrapper[3985]: I0313 01:11:29.256164 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6wzz\" (UniqueName: \"kubernetes.io/projected/8c377a67-e763-4925-afae-a7f8546a369b-kube-api-access-t6wzz\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.278628 master-0 kubenswrapper[3985]: I0313 01:11:29.276915 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:11:29.342021 master-0 kubenswrapper[3985]: I0313 01:11:29.341892 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342021 master-0 kubenswrapper[3985]: I0313 01:11:29.341979 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-systemd-units\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342021 master-0 kubenswrapper[3985]: I0313 01:11:29.342014 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-etc-openvswitch\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342021 master-0 kubenswrapper[3985]: I0313 01:11:29.342041 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-kubelet\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342067 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-ovn\") pod 
\"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342094 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-netns\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342116 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-node-log\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342136 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-openvswitch\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342159 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-log-socket\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342195 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn9sr\" (UniqueName: \"kubernetes.io/projected/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-kube-api-access-tn9sr\") pod \"ovnkube-node-hwpff\" 
(UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342218 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-netd\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342241 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovn-node-metrics-cert\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342265 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-bin\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342286 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-script-lib\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342303 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-systemd\") pod \"ovnkube-node-hwpff\" (UID: 
\"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342336 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342353 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-config\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342371 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-env-overrides\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342389 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-slash\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342407 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-var-lib-openvswitch\") pod \"ovnkube-node-hwpff\" (UID: 
\"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.342496 master-0 kubenswrapper[3985]: I0313 01:11:29.342484 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-var-lib-openvswitch\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.342547 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.342580 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-systemd-units\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.342608 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-etc-openvswitch\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.342628 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-kubelet\") pod \"ovnkube-node-hwpff\" (UID: 
\"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.342652 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-ovn\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.342675 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-netns\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.342705 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-node-log\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.342735 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-openvswitch\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.342757 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-log-socket\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.343039 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-systemd\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.343156 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-ovn-kubernetes\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.343188 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-netd\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344148 master-0 kubenswrapper[3985]: I0313 01:11:29.343915 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-config\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344977 master-0 kubenswrapper[3985]: I0313 01:11:29.344232 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-slash\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 
01:11:29.344977 master-0 kubenswrapper[3985]: I0313 01:11:29.344267 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-bin\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.344977 master-0 kubenswrapper[3985]: I0313 01:11:29.344868 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-script-lib\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.345259 master-0 kubenswrapper[3985]: I0313 01:11:29.345212 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-env-overrides\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.351842 master-0 kubenswrapper[3985]: I0313 01:11:29.348658 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovn-node-metrics-cert\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.363780 master-0 kubenswrapper[3985]: I0313 01:11:29.363708 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn9sr\" (UniqueName: \"kubernetes.io/projected/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-kube-api-access-tn9sr\") pod \"ovnkube-node-hwpff\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.375114 master-0 
kubenswrapper[3985]: I0313 01:11:29.375052 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:29.946161 master-0 kubenswrapper[3985]: I0313 01:11:29.946121 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:11:29.946307 master-0 kubenswrapper[3985]: E0313 01:11:29.946283 3985 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 01:11:29.946370 master-0 kubenswrapper[3985]: E0313 01:11:29.946339 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:01.946324712 +0000 UTC m=+107.823004926 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found Mar 13 01:11:30.496705 master-0 kubenswrapper[3985]: I0313 01:11:30.496625 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" event={"ID":"8c377a67-e763-4925-afae-a7f8546a369b","Type":"ContainerStarted","Data":"c3fb12652881e8adb2430de2ca8198acdcb78af68f4536ee5b8b1f379fabbbfb"} Mar 13 01:11:30.496705 master-0 kubenswrapper[3985]: I0313 01:11:30.496686 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" event={"ID":"8c377a67-e763-4925-afae-a7f8546a369b","Type":"ContainerStarted","Data":"ec17a1f92974fc202f31cbb68ea7af983419d8c972a92fa5e88ff84c017f8e6d"} Mar 13 01:11:30.497993 master-0 kubenswrapper[3985]: I0313 01:11:30.497964 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-xk75p" event={"ID":"de46c12a-aa3e-442e-bcc4-365d05f50103","Type":"ContainerStarted","Data":"c3557efb9b2713233f394c7c4c7cb3a3ec55c443a0dd5e90e2eb7988c8ba853c"} Mar 13 01:11:30.500287 master-0 kubenswrapper[3985]: I0313 01:11:30.500230 3985 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="1c472f002bfa4991c063677c722842d806f2f0b4d30948f00ee774d9c40c71d2" exitCode=0 Mar 13 01:11:30.500287 master-0 kubenswrapper[3985]: I0313 01:11:30.500271 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" event={"ID":"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd","Type":"ContainerDied","Data":"1c472f002bfa4991c063677c722842d806f2f0b4d30948f00ee774d9c40c71d2"} Mar 13 01:11:30.503925 master-0 kubenswrapper[3985]: I0313 01:11:30.503856 
3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerStarted","Data":"4d381865a624bc04fbd2468a95ed0546e2a4ca37142c78f395f943846511aab8"} Mar 13 01:11:30.519118 master-0 kubenswrapper[3985]: I0313 01:11:30.519042 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-xk75p" podStartSLOduration=1.400585237 podStartE2EDuration="14.519016594s" podCreationTimestamp="2026-03-13 01:11:16 +0000 UTC" firstStartedPulling="2026-03-13 01:11:16.807820543 +0000 UTC m=+62.684500797" lastFinishedPulling="2026-03-13 01:11:29.92625194 +0000 UTC m=+75.802932154" observedRunningTime="2026-03-13 01:11:30.517234466 +0000 UTC m=+76.393914690" watchObservedRunningTime="2026-03-13 01:11:30.519016594 +0000 UTC m=+76.395696808" Mar 13 01:11:31.177665 master-0 kubenswrapper[3985]: I0313 01:11:31.177608 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:31.177981 master-0 kubenswrapper[3985]: E0313 01:11:31.177780 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:32.659743 master-0 kubenswrapper[3985]: I0313 01:11:32.658565 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-target-49pfj"] Mar 13 01:11:32.659743 master-0 kubenswrapper[3985]: I0313 01:11:32.659094 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:11:32.659743 master-0 kubenswrapper[3985]: E0313 01:11:32.659190 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b" Mar 13 01:11:32.786099 master-0 kubenswrapper[3985]: I0313 01:11:32.786043 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:11:32.887029 master-0 kubenswrapper[3985]: I0313 01:11:32.886934 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:11:33.024632 master-0 kubenswrapper[3985]: E0313 01:11:33.024540 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 01:11:33.024632 master-0 kubenswrapper[3985]: E0313 01:11:33.024610 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 01:11:33.024632 master-0 
kubenswrapper[3985]: E0313 01:11:33.024633 3985 projected.go:194] Error preparing data for projected volume kube-api-access-tlgsr for pod openshift-network-diagnostics/network-check-target-49pfj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 01:11:33.025132 master-0 kubenswrapper[3985]: E0313 01:11:33.024732 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr podName:34889110-f282-4c2c-a2b0-620033559e1b nodeName:}" failed. No retries permitted until 2026-03-13 01:11:33.524700096 +0000 UTC m=+79.401380490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tlgsr" (UniqueName: "kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr") pod "network-check-target-49pfj" (UID: "34889110-f282-4c2c-a2b0-620033559e1b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 01:11:33.177935 master-0 kubenswrapper[3985]: I0313 01:11:33.177878 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:33.178301 master-0 kubenswrapper[3985]: E0313 01:11:33.178180 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:33.396898 master-0 kubenswrapper[3985]: I0313 01:11:33.396766 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:33.397148 master-0 kubenswrapper[3985]: E0313 01:11:33.397011 3985 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:33.397148 master-0 kubenswrapper[3985]: E0313 01:11:33.397141 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:11:49.397094446 +0000 UTC m=+95.273774840 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : object "openshift-multus"/"metrics-daemon-secret" not registered Mar 13 01:11:33.598548 master-0 kubenswrapper[3985]: I0313 01:11:33.598458 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:11:33.598764 master-0 kubenswrapper[3985]: E0313 01:11:33.598640 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 01:11:33.598764 master-0 kubenswrapper[3985]: E0313 01:11:33.598660 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 01:11:33.598764 master-0 kubenswrapper[3985]: E0313 01:11:33.598673 3985 projected.go:194] Error preparing data for projected volume kube-api-access-tlgsr for pod openshift-network-diagnostics/network-check-target-49pfj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 01:11:33.598764 master-0 kubenswrapper[3985]: E0313 01:11:33.598733 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr podName:34889110-f282-4c2c-a2b0-620033559e1b nodeName:}" failed. 
No retries permitted until 2026-03-13 01:11:34.598715935 +0000 UTC m=+80.475396149 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-tlgsr" (UniqueName: "kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr") pod "network-check-target-49pfj" (UID: "34889110-f282-4c2c-a2b0-620033559e1b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 01:11:34.183169 master-0 kubenswrapper[3985]: I0313 01:11:34.182865 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:11:34.184485 master-0 kubenswrapper[3985]: E0313 01:11:34.183735 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b" Mar 13 01:11:34.188588 master-0 kubenswrapper[3985]: W0313 01:11:34.187077 3985 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost") Mar 13 01:11:34.188588 master-0 kubenswrapper[3985]: I0313 01:11:34.187946 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 13 01:11:34.199034 master-0 kubenswrapper[3985]: I0313 01:11:34.198962 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 01:11:34.629565 master-0 kubenswrapper[3985]: I0313 01:11:34.628455 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-mcps9"] Mar 13 01:11:34.629565 master-0 kubenswrapper[3985]: I0313 01:11:34.628924 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.633764 master-0 kubenswrapper[3985]: I0313 01:11:34.631595 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 01:11:34.633764 master-0 kubenswrapper[3985]: I0313 01:11:34.631844 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 01:11:34.633764 master-0 kubenswrapper[3985]: I0313 01:11:34.631964 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 01:11:34.633764 master-0 kubenswrapper[3985]: I0313 01:11:34.632439 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 01:11:34.633764 master-0 kubenswrapper[3985]: I0313 01:11:34.632609 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 01:11:34.687152 master-0 kubenswrapper[3985]: I0313 01:11:34.686561 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0-master-0" podStartSLOduration=1.686532156 podStartE2EDuration="1.686532156s" podCreationTimestamp="2026-03-13 01:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:11:34.664021953 +0000 UTC m=+80.540702207" watchObservedRunningTime="2026-03-13 01:11:34.686532156 +0000 UTC m=+80.563212370" Mar 13 01:11:34.687152 master-0 kubenswrapper[3985]: I0313 01:11:34.686908 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-scheduler-master-0" podStartSLOduration=0.686902994 podStartE2EDuration="686.902994ms" podCreationTimestamp="2026-03-13 01:11:34 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:11:34.686782161 +0000 UTC m=+80.563462385" watchObservedRunningTime="2026-03-13 01:11:34.686902994 +0000 UTC m=+80.563583208" Mar 13 01:11:34.689315 master-0 kubenswrapper[3985]: I0313 01:11:34.689258 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:11:34.689382 master-0 kubenswrapper[3985]: I0313 01:11:34.689329 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-env-overrides\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.689559 master-0 kubenswrapper[3985]: E0313 01:11:34.689492 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 01:11:34.689559 master-0 kubenswrapper[3985]: E0313 01:11:34.689560 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 01:11:34.689627 master-0 kubenswrapper[3985]: E0313 01:11:34.689576 3985 projected.go:194] Error preparing data for projected volume kube-api-access-tlgsr for pod openshift-network-diagnostics/network-check-target-49pfj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 01:11:34.689663 master-0 kubenswrapper[3985]: E0313 01:11:34.689644 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr podName:34889110-f282-4c2c-a2b0-620033559e1b nodeName:}" failed. No retries permitted until 2026-03-13 01:11:36.689615591 +0000 UTC m=+82.566295815 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-tlgsr" (UniqueName: "kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr") pod "network-check-target-49pfj" (UID: "34889110-f282-4c2c-a2b0-620033559e1b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 01:11:34.689803 master-0 kubenswrapper[3985]: I0313 01:11:34.689736 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-ovnkube-identity-cm\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.689870 master-0 kubenswrapper[3985]: I0313 01:11:34.689846 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-txxbg\" (UniqueName: \"kubernetes.io/projected/c687237e-50e5-405d-8fef-0efbc3866630-kube-api-access-txxbg\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.690030 master-0 kubenswrapper[3985]: I0313 01:11:34.689970 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/c687237e-50e5-405d-8fef-0efbc3866630-webhook-cert\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.791318 master-0 kubenswrapper[3985]: I0313 01:11:34.790985 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txxbg\" (UniqueName: \"kubernetes.io/projected/c687237e-50e5-405d-8fef-0efbc3866630-kube-api-access-txxbg\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.791318 master-0 kubenswrapper[3985]: I0313 01:11:34.791085 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c687237e-50e5-405d-8fef-0efbc3866630-webhook-cert\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.791318 master-0 kubenswrapper[3985]: I0313 01:11:34.791189 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-env-overrides\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.791318 master-0 kubenswrapper[3985]: I0313 01:11:34.791211 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-ovnkube-identity-cm\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.793407 master-0 kubenswrapper[3985]: I0313 
01:11:34.792347 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-ovnkube-identity-cm\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.793407 master-0 kubenswrapper[3985]: I0313 01:11:34.793278 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-env-overrides\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.798485 master-0 kubenswrapper[3985]: I0313 01:11:34.798434 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c687237e-50e5-405d-8fef-0efbc3866630-webhook-cert\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.816632 master-0 kubenswrapper[3985]: I0313 01:11:34.816550 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txxbg\" (UniqueName: \"kubernetes.io/projected/c687237e-50e5-405d-8fef-0efbc3866630-kube-api-access-txxbg\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:34.944581 master-0 kubenswrapper[3985]: I0313 01:11:34.944501 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:11:35.176683 master-0 kubenswrapper[3985]: I0313 01:11:35.176532 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:35.177942 master-0 kubenswrapper[3985]: E0313 01:11:35.177808 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:35.521988 master-0 kubenswrapper[3985]: I0313 01:11:35.521904 3985 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="2b884799b97327428feac7cdc419e91ce2a3eaeb0bebe09185e54d595c2b45d1" exitCode=0 Mar 13 01:11:35.523137 master-0 kubenswrapper[3985]: I0313 01:11:35.521997 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" event={"ID":"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd","Type":"ContainerDied","Data":"2b884799b97327428feac7cdc419e91ce2a3eaeb0bebe09185e54d595c2b45d1"} Mar 13 01:11:35.525711 master-0 kubenswrapper[3985]: I0313 01:11:35.525654 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-mcps9" event={"ID":"c687237e-50e5-405d-8fef-0efbc3866630","Type":"ContainerStarted","Data":"458ff1fddfd5f3b95a485a4b0cb8e88a31c5825a6f8733cb5141f441c672f2be"} Mar 13 01:11:36.177887 master-0 kubenswrapper[3985]: I0313 01:11:36.177091 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:36.177887 master-0 kubenswrapper[3985]: E0313 01:11:36.177266 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b"
Mar 13 01:11:36.712725 master-0 kubenswrapper[3985]: I0313 01:11:36.712588 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:36.713207 master-0 kubenswrapper[3985]: E0313 01:11:36.712791 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 01:11:36.713207 master-0 kubenswrapper[3985]: E0313 01:11:36.712812 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 01:11:36.713207 master-0 kubenswrapper[3985]: E0313 01:11:36.712823 3985 projected.go:194] Error preparing data for projected volume kube-api-access-tlgsr for pod openshift-network-diagnostics/network-check-target-49pfj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 01:11:36.713207 master-0 kubenswrapper[3985]: E0313 01:11:36.712887 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr podName:34889110-f282-4c2c-a2b0-620033559e1b nodeName:}" failed. No retries permitted until 2026-03-13 01:11:40.71286938 +0000 UTC m=+86.589549594 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-tlgsr" (UniqueName: "kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr") pod "network-check-target-49pfj" (UID: "34889110-f282-4c2c-a2b0-620033559e1b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 01:11:37.177120 master-0 kubenswrapper[3985]: I0313 01:11:37.177001 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:37.179774 master-0 kubenswrapper[3985]: E0313 01:11:37.179444 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:11:37.544420 master-0 kubenswrapper[3985]: I0313 01:11:37.544368 3985 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="79b311e1fab325ef8d97bf345a46f71efc38634e77d8ae4e5e2904a28462f5b3" exitCode=0
Mar 13 01:11:37.544614 master-0 kubenswrapper[3985]: I0313 01:11:37.544432 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" event={"ID":"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd","Type":"ContainerDied","Data":"79b311e1fab325ef8d97bf345a46f71efc38634e77d8ae4e5e2904a28462f5b3"}
Mar 13 01:11:38.177211 master-0 kubenswrapper[3985]: I0313 01:11:38.177145 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:38.177751 master-0 kubenswrapper[3985]: E0313 01:11:38.177360 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b"
Mar 13 01:11:39.177110 master-0 kubenswrapper[3985]: I0313 01:11:39.176540 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:39.177110 master-0 kubenswrapper[3985]: E0313 01:11:39.176663 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:11:40.177236 master-0 kubenswrapper[3985]: I0313 01:11:40.177168 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:40.177556 master-0 kubenswrapper[3985]: E0313 01:11:40.177341 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b"
Mar 13 01:11:40.774145 master-0 kubenswrapper[3985]: I0313 01:11:40.774075 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:40.774386 master-0 kubenswrapper[3985]: E0313 01:11:40.774288 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 01:11:40.774386 master-0 kubenswrapper[3985]: E0313 01:11:40.774307 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 01:11:40.774386 master-0 kubenswrapper[3985]: E0313 01:11:40.774321 3985 projected.go:194] Error preparing data for projected volume kube-api-access-tlgsr for pod openshift-network-diagnostics/network-check-target-49pfj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 01:11:40.774386 master-0 kubenswrapper[3985]: E0313 01:11:40.774388 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr podName:34889110-f282-4c2c-a2b0-620033559e1b nodeName:}" failed. No retries permitted until 2026-03-13 01:11:48.774369122 +0000 UTC m=+94.651049336 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-tlgsr" (UniqueName: "kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr") pod "network-check-target-49pfj" (UID: "34889110-f282-4c2c-a2b0-620033559e1b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 01:11:41.177796 master-0 kubenswrapper[3985]: I0313 01:11:41.177655 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:41.179778 master-0 kubenswrapper[3985]: E0313 01:11:41.177833 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:11:42.176852 master-0 kubenswrapper[3985]: I0313 01:11:42.176772 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:42.177883 master-0 kubenswrapper[3985]: E0313 01:11:42.176943 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b"
Mar 13 01:11:43.177892 master-0 kubenswrapper[3985]: I0313 01:11:43.177209 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:43.177892 master-0 kubenswrapper[3985]: E0313 01:11:43.177412 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:11:44.176764 master-0 kubenswrapper[3985]: I0313 01:11:44.176688 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:44.177219 master-0 kubenswrapper[3985]: E0313 01:11:44.176914 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b"
Mar 13 01:11:45.176975 master-0 kubenswrapper[3985]: I0313 01:11:45.176889 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:45.177777 master-0 kubenswrapper[3985]: E0313 01:11:45.177708 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:11:46.177384 master-0 kubenswrapper[3985]: I0313 01:11:46.177290 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:46.178685 master-0 kubenswrapper[3985]: E0313 01:11:46.177555 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b"
Mar 13 01:11:47.177931 master-0 kubenswrapper[3985]: I0313 01:11:47.177525 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:47.178756 master-0 kubenswrapper[3985]: E0313 01:11:47.178026 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:11:48.177320 master-0 kubenswrapper[3985]: I0313 01:11:48.177160 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:48.177320 master-0 kubenswrapper[3985]: E0313 01:11:48.177279 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b"
Mar 13 01:11:48.800151 master-0 kubenswrapper[3985]: I0313 01:11:48.800081 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" event={"ID":"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd","Type":"ContainerStarted","Data":"2624aa9d22934134d13192016a21d94a8ed206c5e3cce209796939167e9e62b2"}
Mar 13 01:11:48.802305 master-0 kubenswrapper[3985]: I0313 01:11:48.802281 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-mcps9" event={"ID":"c687237e-50e5-405d-8fef-0efbc3866630","Type":"ContainerStarted","Data":"e3df065b008ac8246da3fd6b761b20c8995bcf9a520ea600ac069af7886f11c0"}
Mar 13 01:11:48.803333 master-0 kubenswrapper[3985]: I0313 01:11:48.803309 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" event={"ID":"8c377a67-e763-4925-afae-a7f8546a369b","Type":"ContainerStarted","Data":"7e4809732e6f42f6e1aaeab2220c5d3d3098fc28ea26ac8cc73446ea1b10cd93"}
Mar 13 01:11:48.804481 master-0 kubenswrapper[3985]: I0313 01:11:48.804453 3985 generic.go:334] "Generic (PLEG): container finished" podID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerID="5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd" exitCode=0
Mar 13 01:11:48.804583 master-0 kubenswrapper[3985]: I0313 01:11:48.804483 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerDied","Data":"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd"}
Mar 13 01:11:48.840963 master-0 kubenswrapper[3985]: I0313 01:11:48.840870 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" podStartSLOduration=2.28672954 podStartE2EDuration="20.84084483s" podCreationTimestamp="2026-03-13 01:11:28 +0000 UTC" firstStartedPulling="2026-03-13 01:11:30.071488124 +0000 UTC m=+75.948168338" lastFinishedPulling="2026-03-13 01:11:48.625603414 +0000 UTC m=+94.502283628" observedRunningTime="2026-03-13 01:11:48.840084605 +0000 UTC m=+94.716764839" watchObservedRunningTime="2026-03-13 01:11:48.84084483 +0000 UTC m=+94.717525054"
Mar 13 01:11:48.868231 master-0 kubenswrapper[3985]: I0313 01:11:48.867884 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:48.868231 master-0 kubenswrapper[3985]: E0313 01:11:48.868068 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Mar 13 01:11:48.868231 master-0 kubenswrapper[3985]: E0313 01:11:48.868087 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Mar 13 01:11:48.868231 master-0 kubenswrapper[3985]: E0313 01:11:48.868099 3985 projected.go:194] Error preparing data for projected volume kube-api-access-tlgsr for pod openshift-network-diagnostics/network-check-target-49pfj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 01:11:48.868231 master-0 kubenswrapper[3985]: E0313 01:11:48.868160 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr podName:34889110-f282-4c2c-a2b0-620033559e1b nodeName:}" failed. No retries permitted until 2026-03-13 01:12:04.868140714 +0000 UTC m=+110.744820928 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-tlgsr" (UniqueName: "kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr") pod "network-check-target-49pfj" (UID: "34889110-f282-4c2c-a2b0-620033559e1b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Mar 13 01:11:49.177152 master-0 kubenswrapper[3985]: I0313 01:11:49.176791 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:49.177152 master-0 kubenswrapper[3985]: E0313 01:11:49.177029 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:11:49.476194 master-0 kubenswrapper[3985]: I0313 01:11:49.476093 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:49.476478 master-0 kubenswrapper[3985]: E0313 01:11:49.476313 3985 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 01:11:49.476478 master-0 kubenswrapper[3985]: E0313 01:11:49.476464 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:12:21.476431723 +0000 UTC m=+127.353111967 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : object "openshift-multus"/"metrics-daemon-secret" not registered
Mar 13 01:11:49.815166 master-0 kubenswrapper[3985]: I0313 01:11:49.815033 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerStarted","Data":"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33"}
Mar 13 01:11:49.815166 master-0 kubenswrapper[3985]: I0313 01:11:49.815122 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerStarted","Data":"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8"}
Mar 13 01:11:49.815166 master-0 kubenswrapper[3985]: I0313 01:11:49.815145 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerStarted","Data":"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7"}
Mar 13 01:11:49.815166 master-0 kubenswrapper[3985]: I0313 01:11:49.815167 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerStarted","Data":"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59"}
Mar 13 01:11:49.815166 master-0 kubenswrapper[3985]: I0313 01:11:49.815189 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerStarted","Data":"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c"}
Mar 13 01:11:49.817012 master-0 kubenswrapper[3985]: I0313 01:11:49.815209 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerStarted","Data":"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6"}
Mar 13 01:11:49.819865 master-0 kubenswrapper[3985]: I0313 01:11:49.819775 3985 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="2624aa9d22934134d13192016a21d94a8ed206c5e3cce209796939167e9e62b2" exitCode=0
Mar 13 01:11:49.820015 master-0 kubenswrapper[3985]: I0313 01:11:49.819899 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" event={"ID":"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd","Type":"ContainerDied","Data":"2624aa9d22934134d13192016a21d94a8ed206c5e3cce209796939167e9e62b2"}
Mar 13 01:11:49.823973 master-0 kubenswrapper[3985]: I0313 01:11:49.823903 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-mcps9" event={"ID":"c687237e-50e5-405d-8fef-0efbc3866630","Type":"ContainerStarted","Data":"826ddf0fad5a47b74a9e97796304f54274bf436e1dab02b9917102d0ced785b8"}
Mar 13 01:11:50.176752 master-0 kubenswrapper[3985]: I0313 01:11:50.176683 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:50.176908 master-0 kubenswrapper[3985]: E0313 01:11:50.176851 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b"
Mar 13 01:11:50.833550 master-0 kubenswrapper[3985]: I0313 01:11:50.833405 3985 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="0234ab75b7bd5b13b1837cf8436f89b14014ac9adcda65e897e6eb1551c1103a" exitCode=0
Mar 13 01:11:50.834584 master-0 kubenswrapper[3985]: I0313 01:11:50.833600 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" event={"ID":"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd","Type":"ContainerDied","Data":"0234ab75b7bd5b13b1837cf8436f89b14014ac9adcda65e897e6eb1551c1103a"}
Mar 13 01:11:50.863444 master-0 kubenswrapper[3985]: I0313 01:11:50.863331 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-node-identity/network-node-identity-mcps9" podStartSLOduration=3.237763186 podStartE2EDuration="16.863303383s" podCreationTimestamp="2026-03-13 01:11:34 +0000 UTC" firstStartedPulling="2026-03-13 01:11:34.964298716 +0000 UTC m=+80.840978930" lastFinishedPulling="2026-03-13 01:11:48.589838913 +0000 UTC m=+94.466519127" observedRunningTime="2026-03-13 01:11:49.87105331 +0000 UTC m=+95.747733584" watchObservedRunningTime="2026-03-13 01:11:50.863303383 +0000 UTC m=+96.739983627"
Mar 13 01:11:51.177573 master-0 kubenswrapper[3985]: I0313 01:11:51.177443 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:51.177913 master-0 kubenswrapper[3985]: E0313 01:11:51.177816 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:11:51.846852 master-0 kubenswrapper[3985]: I0313 01:11:51.846676 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" event={"ID":"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd","Type":"ContainerStarted","Data":"75f9ca78cdf7709ec03b2a29cc76de7a22e00b7088f615a6a43e94612a7326b9"}
Mar 13 01:11:51.853018 master-0 kubenswrapper[3985]: I0313 01:11:51.852935 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerStarted","Data":"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162"}
Mar 13 01:11:51.930792 master-0 kubenswrapper[3985]: I0313 01:11:51.930021 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-mjh5s" podStartSLOduration=4.467717461 podStartE2EDuration="35.929992319s" podCreationTimestamp="2026-03-13 01:11:16 +0000 UTC" firstStartedPulling="2026-03-13 01:11:17.039272849 +0000 UTC m=+62.915953063" lastFinishedPulling="2026-03-13 01:11:48.501547707 +0000 UTC m=+94.378227921" observedRunningTime="2026-03-13 01:11:51.929833606 +0000 UTC m=+97.806513850" watchObservedRunningTime="2026-03-13 01:11:51.929992319 +0000 UTC m=+97.806672533"
Mar 13 01:11:52.176640 master-0 kubenswrapper[3985]: I0313 01:11:52.176451 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:52.176877 master-0 kubenswrapper[3985]: E0313 01:11:52.176728 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b"
Mar 13 01:11:53.177556 master-0 kubenswrapper[3985]: I0313 01:11:53.177406 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:53.178411 master-0 kubenswrapper[3985]: E0313 01:11:53.177778 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:11:54.177929 master-0 kubenswrapper[3985]: I0313 01:11:54.177190 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:54.179477 master-0 kubenswrapper[3985]: E0313 01:11:54.178122 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b"
Mar 13 01:11:54.189094 master-0 kubenswrapper[3985]: I0313 01:11:54.189006 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 13 01:11:54.359669 master-0 kubenswrapper[3985]: I0313 01:11:54.359600 3985 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hwpff"]
Mar 13 01:11:54.883363 master-0 kubenswrapper[3985]: I0313 01:11:54.883291 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerStarted","Data":"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d"}
Mar 13 01:11:55.001453 master-0 kubenswrapper[3985]: I0313 01:11:55.001301 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/bootstrap-kube-controller-manager-master-0" podStartSLOduration=1.001271744 podStartE2EDuration="1.001271744s" podCreationTimestamp="2026-03-13 01:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:11:54.953247905 +0000 UTC m=+100.829928219" watchObservedRunningTime="2026-03-13 01:11:55.001271744 +0000 UTC m=+100.877951988"
Mar 13 01:11:55.002398 master-0 kubenswrapper[3985]: I0313 01:11:55.002324 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" podStartSLOduration=7.255860111 podStartE2EDuration="26.002310346s" podCreationTimestamp="2026-03-13 01:11:29 +0000 UTC" firstStartedPulling="2026-03-13 01:11:29.834811358 +0000 UTC m=+75.711491572" lastFinishedPulling="2026-03-13 01:11:48.581261593 +0000 UTC m=+94.457941807" observedRunningTime="2026-03-13 01:11:55.000999679 +0000 UTC m=+100.877679953" watchObservedRunningTime="2026-03-13 01:11:55.002310346 +0000 UTC m=+100.878990600"
Mar 13 01:11:55.177873 master-0 kubenswrapper[3985]: I0313 01:11:55.177649 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:11:55.178146 master-0 kubenswrapper[3985]: E0313 01:11:55.177902 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:11:55.888645 master-0 kubenswrapper[3985]: I0313 01:11:55.888193 3985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7" gracePeriod=30
Mar 13 01:11:55.888645 master-0 kubenswrapper[3985]: I0313 01:11:55.888236 3985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="northd" containerID="cri-o://4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8" gracePeriod=30
Mar 13 01:11:55.888645 master-0 kubenswrapper[3985]: I0313 01:11:55.888205 3985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="nbdb" containerID="cri-o://349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33" gracePeriod=30
Mar 13 01:11:55.888645 master-0 kubenswrapper[3985]: I0313 01:11:55.888415 3985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="kube-rbac-proxy-node" containerID="cri-o://8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59" gracePeriod=30
Mar 13 01:11:55.888645 master-0 kubenswrapper[3985]: I0313 01:11:55.888438 3985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="sbdb" containerID="cri-o://594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" gracePeriod=30
Mar 13 01:11:55.888645 master-0 kubenswrapper[3985]: I0313 01:11:55.888362 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff"
Mar 13 01:11:55.888645 master-0 kubenswrapper[3985]: I0313 01:11:55.888540 3985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovn-acl-logging" containerID="cri-o://531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c" gracePeriod=30
Mar 13 01:11:55.888645 master-0 kubenswrapper[3985]: I0313 01:11:55.888589 3985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovn-controller" containerID="cri-o://995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6" gracePeriod=30
Mar 13 01:11:55.889599 master-0 kubenswrapper[3985]: I0313 01:11:55.888729 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff"
Mar 13 01:11:55.889599 master-0 kubenswrapper[3985]: I0313 01:11:55.888776 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff"
Mar 13 01:11:55.902302 master-0 kubenswrapper[3985]: E0313 01:11:55.901628 3985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 13 01:11:55.905226 master-0 kubenswrapper[3985]: E0313 01:11:55.905158 3985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 13 01:11:55.907697 master-0 kubenswrapper[3985]: E0313 01:11:55.907233 3985 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" cmd=["/bin/bash","-c","set -xeo pipefail\n. /ovnkube-lib/ovnkube-lib.sh || exit 1\novndb-readiness-probe \"sb\"\n"]
Mar 13 01:11:55.907697 master-0 kubenswrapper[3985]: E0313 01:11:55.907345 3985 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="sbdb"
Mar 13 01:11:55.934080 master-0 kubenswrapper[3985]: I0313 01:11:55.934005 3985 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovnkube-controller" containerID="cri-o://80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d" gracePeriod=30
Mar 13 01:11:55.942043 master-0 kubenswrapper[3985]: I0313 01:11:55.941988 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff"
Mar 13 01:11:56.176913 master-0 kubenswrapper[3985]: I0313 01:11:56.176796 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:11:56.177127 master-0 kubenswrapper[3985]: E0313 01:11:56.176958 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b" Mar 13 01:11:56.269984 master-0 kubenswrapper[3985]: I0313 01:11:56.269904 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hwpff_98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/ovnkube-controller/0.log" Mar 13 01:11:56.272089 master-0 kubenswrapper[3985]: I0313 01:11:56.272015 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hwpff_98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/kube-rbac-proxy-ovn-metrics/0.log" Mar 13 01:11:56.272684 master-0 kubenswrapper[3985]: I0313 01:11:56.272624 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hwpff_98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/kube-rbac-proxy-node/0.log" Mar 13 01:11:56.273290 master-0 kubenswrapper[3985]: I0313 01:11:56.273228 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hwpff_98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/ovn-acl-logging/0.log" Mar 13 01:11:56.273935 master-0 kubenswrapper[3985]: I0313 01:11:56.273891 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hwpff_98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/ovn-controller/0.log" Mar 13 01:11:56.274589 master-0 kubenswrapper[3985]: I0313 01:11:56.274550 3985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342224 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nlhbx"] Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: E0313 01:11:56.342367 3985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovnkube-controller" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342386 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovnkube-controller" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: E0313 01:11:56.342399 3985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="sbdb" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342409 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="sbdb" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: E0313 01:11:56.342420 3985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="kubecfg-setup" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342430 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="kubecfg-setup" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: E0313 01:11:56.342439 3985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovn-controller" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342448 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovn-controller" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: E0313 01:11:56.342458 3985 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="northd" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342467 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="northd" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: E0313 01:11:56.342477 3985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovn-acl-logging" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342487 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovn-acl-logging" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: E0313 01:11:56.342497 3985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342526 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: E0313 01:11:56.342538 3985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="nbdb" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342547 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="nbdb" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: E0313 01:11:56.342556 3985 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="kube-rbac-proxy-node" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342565 3985 state_mem.go:107] "Deleted CPUSet assignment" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" 
containerName="kube-rbac-proxy-node" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342613 3985 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="kube-rbac-proxy-ovn-metrics" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342628 3985 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovn-controller" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342638 3985 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="sbdb" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342648 3985 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="northd" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342659 3985 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovnkube-controller" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342668 3985 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="nbdb" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342678 3985 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="kube-rbac-proxy-node" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.342687 3985 memory_manager.go:354] "RemoveStaleState removing state" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerName="ovn-acl-logging" Mar 13 01:11:56.346939 master-0 kubenswrapper[3985]: I0313 01:11:56.343656 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.449363 master-0 kubenswrapper[3985]: I0313 01:11:56.449193 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tn9sr\" (UniqueName: \"kubernetes.io/projected/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-kube-api-access-tn9sr\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449363 master-0 kubenswrapper[3985]: I0313 01:11:56.449251 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-script-lib\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449363 master-0 kubenswrapper[3985]: I0313 01:11:56.449295 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-systemd-units\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449363 master-0 kubenswrapper[3985]: I0313 01:11:56.449335 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "systemd-units". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449448 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-netns\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449464 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-slash\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449479 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-systemd\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449493 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-netd\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449545 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-node-log\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449558 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-etc-openvswitch\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449576 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovn-node-metrics-cert\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449627 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-ovn-kubernetes\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449641 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-kubelet\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449655 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449673 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-openvswitch\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449689 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-log-socket\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449702 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-bin\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449714 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-var-lib-openvswitch\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449731 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-env-overrides\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449737 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: 
"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449747 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-ovn\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.449820 master-0 kubenswrapper[3985]: I0313 01:11:56.449763 3985 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-config\") pod \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\" (UID: \"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5\") " Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449809 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-systemd-units\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449830 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-node-log\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449845 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-bin\") pod \"ovnkube-node-nlhbx\" (UID: 
\"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449871 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-slash\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449886 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449901 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-systemd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449915 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-script-lib\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449932 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-etc-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449939 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449937 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.449991 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-node-log" (OuterVolumeSpecName: "node-log") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.450008 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.450002 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.450055 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-slash" (OuterVolumeSpecName: "host-slash") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.450082 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.450857 master-0 kubenswrapper[3985]: I0313 01:11:56.450147 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450179 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450210 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450233 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450250 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-log-socket" (OuterVolumeSpecName: "log-socket") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450267 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450282 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.449967 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49a28ab7-1176-4213-b037-19fe18bbe57b-ovn-node-metrics-cert\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450321 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n58nf\" (UniqueName: \"kubernetes.io/projected/49a28ab7-1176-4213-b037-19fe18bbe57b-kube-api-access-n58nf\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450343 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-config\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450365 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-kubelet\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450384 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-var-lib-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450406 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450430 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 
01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450454 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-ovn\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.451780 master-0 kubenswrapper[3985]: I0313 01:11:56.450468 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-netd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450486 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-netns\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450500 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-env-overrides\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450532 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-log-socket\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450561 3985 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-netd\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450570 3985 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-node-log\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450579 3985 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-etc-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450646 3985 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450699 3985 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-kubelet\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450744 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450772 3985 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-var-lib-cni-networks-ovn-kubernetes\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450803 3985 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450822 3985 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-log-socket\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450842 3985 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-cni-bin\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450860 3985 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-var-lib-openvswitch\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450882 3985 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-env-overrides\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450901 3985 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-ovn\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450920 3985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-script-lib\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450938 3985 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-systemd-units\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450955 3985 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-run-netns\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.452777 master-0 kubenswrapper[3985]: I0313 01:11:56.450972 3985 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-host-slash\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.454029 master-0 kubenswrapper[3985]: I0313 01:11:56.453985 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-kube-api-access-tn9sr" (OuterVolumeSpecName: "kube-api-access-tn9sr") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "kube-api-access-tn9sr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:11:56.454570 master-0 kubenswrapper[3985]: I0313 01:11:56.454503 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:11:56.455659 master-0 kubenswrapper[3985]: I0313 01:11:56.455615 3985 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" (UID: "98a84646-2b22-45a4-9fd4-d41ea0a9b6d5"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:11:56.551471 master-0 kubenswrapper[3985]: I0313 01:11:56.551371 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-etc-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.551735 master-0 kubenswrapper[3985]: I0313 01:11:56.551562 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-etc-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.551735 master-0 kubenswrapper[3985]: I0313 01:11:56.551615 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/49a28ab7-1176-4213-b037-19fe18bbe57b-ovn-node-metrics-cert\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.551735 master-0 kubenswrapper[3985]: I0313 01:11:56.551695 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n58nf\" (UniqueName: \"kubernetes.io/projected/49a28ab7-1176-4213-b037-19fe18bbe57b-kube-api-access-n58nf\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.551955 master-0 kubenswrapper[3985]: I0313 01:11:56.551747 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-config\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552017 master-0 kubenswrapper[3985]: I0313 01:11:56.551975 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-var-lib-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552087 master-0 kubenswrapper[3985]: I0313 01:11:56.552054 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-kubelet\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552149 master-0 kubenswrapper[3985]: I0313 01:11:56.552100 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552335 master-0 kubenswrapper[3985]: I0313 01:11:56.552233 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-var-lib-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552416 master-0 kubenswrapper[3985]: I0313 01:11:56.552322 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552416 master-0 kubenswrapper[3985]: I0313 01:11:56.552397 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-ovn\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552416 master-0 kubenswrapper[3985]: I0313 01:11:56.552398 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-kubelet\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552640 master-0 kubenswrapper[3985]: I0313 01:11:56.552436 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552640 master-0 kubenswrapper[3985]: I0313 01:11:56.552474 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552640 master-0 kubenswrapper[3985]: I0313 01:11:56.552496 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-netd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552640 master-0 kubenswrapper[3985]: I0313 01:11:56.552559 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-netd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552640 master-0 kubenswrapper[3985]: I0313 01:11:56.552603 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-netns\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552640 master-0 kubenswrapper[3985]: I0313 01:11:56.552641 3985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-log-socket\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552640 master-0 kubenswrapper[3985]: I0313 01:11:56.552531 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-ovn\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552640 master-0 kubenswrapper[3985]: I0313 01:11:56.552682 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-env-overrides\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.552640 master-0 kubenswrapper[3985]: I0313 01:11:56.552739 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-systemd-units\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.552767 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-node-log\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.552803 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-bin\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.552863 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-slash\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.552891 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.552922 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-systemd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.552967 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-script-lib\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.552984 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-config\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.553029 3985 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovnkube-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.553062 3985 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tn9sr\" (UniqueName: \"kubernetes.io/projected/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-kube-api-access-tn9sr\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.553095 3985 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-run-systemd\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.553109 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-node-log\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.553123 3985 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5-ovn-node-metrics-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.553162 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-netns\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.553226 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-systemd-units\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.553250 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-log-socket\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.553257 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-slash\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.553245 master-0 kubenswrapper[3985]: I0313 01:11:56.553274 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-bin\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.555139 master-0 kubenswrapper[3985]: I0313 01:11:56.553320 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.555139 master-0 kubenswrapper[3985]: I0313 01:11:56.553402 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-systemd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.555139 master-0 kubenswrapper[3985]: I0313 01:11:56.554800 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-script-lib\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.555139 master-0 kubenswrapper[3985]: I0313 01:11:56.555020 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49a28ab7-1176-4213-b037-19fe18bbe57b-ovn-node-metrics-cert\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.555139 master-0 kubenswrapper[3985]: I0313 01:11:56.555055 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-env-overrides\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.568737 master-0 kubenswrapper[3985]: I0313 01:11:56.568651 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n58nf\" (UniqueName: 
\"kubernetes.io/projected/49a28ab7-1176-4213-b037-19fe18bbe57b-kube-api-access-n58nf\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.661574 master-0 kubenswrapper[3985]: I0313 01:11:56.661413 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:11:56.679445 master-0 kubenswrapper[3985]: W0313 01:11:56.679336 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49a28ab7_1176_4213_b037_19fe18bbe57b.slice/crio-09550970a5450b6b18862ef0c3ad02b9ed34a2674a41f1a5f7113f8a2249dc19 WatchSource:0}: Error finding container 09550970a5450b6b18862ef0c3ad02b9ed34a2674a41f1a5f7113f8a2249dc19: Status 404 returned error can't find the container with id 09550970a5450b6b18862ef0c3ad02b9ed34a2674a41f1a5f7113f8a2249dc19 Mar 13 01:11:56.894196 master-0 kubenswrapper[3985]: I0313 01:11:56.894127 3985 generic.go:334] "Generic (PLEG): container finished" podID="49a28ab7-1176-4213-b037-19fe18bbe57b" containerID="84a75bf6c5b0aae138001278a5abd61d9c21955abcbf0e21925aa4e975040741" exitCode=0 Mar 13 01:11:56.894565 master-0 kubenswrapper[3985]: I0313 01:11:56.894255 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" event={"ID":"49a28ab7-1176-4213-b037-19fe18bbe57b","Type":"ContainerDied","Data":"84a75bf6c5b0aae138001278a5abd61d9c21955abcbf0e21925aa4e975040741"} Mar 13 01:11:56.894565 master-0 kubenswrapper[3985]: I0313 01:11:56.894300 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" event={"ID":"49a28ab7-1176-4213-b037-19fe18bbe57b","Type":"ContainerStarted","Data":"09550970a5450b6b18862ef0c3ad02b9ed34a2674a41f1a5f7113f8a2249dc19"} Mar 13 01:11:56.896949 master-0 kubenswrapper[3985]: I0313 01:11:56.896899 3985 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hwpff_98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/ovnkube-controller/0.log" Mar 13 01:11:56.903285 master-0 kubenswrapper[3985]: I0313 01:11:56.903244 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hwpff_98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/kube-rbac-proxy-ovn-metrics/0.log" Mar 13 01:11:56.904204 master-0 kubenswrapper[3985]: I0313 01:11:56.904154 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hwpff_98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/kube-rbac-proxy-node/0.log" Mar 13 01:11:56.905004 master-0 kubenswrapper[3985]: I0313 01:11:56.904956 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hwpff_98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/ovn-acl-logging/0.log" Mar 13 01:11:56.905680 master-0 kubenswrapper[3985]: I0313 01:11:56.905641 3985 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-hwpff_98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/ovn-controller/0.log" Mar 13 01:11:56.906309 master-0 kubenswrapper[3985]: I0313 01:11:56.906257 3985 generic.go:334] "Generic (PLEG): container finished" podID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerID="80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d" exitCode=2 Mar 13 01:11:56.906309 master-0 kubenswrapper[3985]: I0313 01:11:56.906300 3985 generic.go:334] "Generic (PLEG): container finished" podID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" exitCode=0 Mar 13 01:11:56.906401 master-0 kubenswrapper[3985]: I0313 01:11:56.906320 3985 generic.go:334] "Generic (PLEG): container finished" podID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerID="349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33" exitCode=0 Mar 13 01:11:56.906401 master-0 kubenswrapper[3985]: I0313 
01:11:56.906339 3985 generic.go:334] "Generic (PLEG): container finished" podID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerID="4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8" exitCode=0 Mar 13 01:11:56.906401 master-0 kubenswrapper[3985]: I0313 01:11:56.906355 3985 generic.go:334] "Generic (PLEG): container finished" podID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerID="7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7" exitCode=143 Mar 13 01:11:56.906401 master-0 kubenswrapper[3985]: I0313 01:11:56.906376 3985 generic.go:334] "Generic (PLEG): container finished" podID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerID="8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59" exitCode=143 Mar 13 01:11:56.906401 master-0 kubenswrapper[3985]: I0313 01:11:56.906400 3985 generic.go:334] "Generic (PLEG): container finished" podID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerID="531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c" exitCode=143 Mar 13 01:11:56.906641 master-0 kubenswrapper[3985]: I0313 01:11:56.906415 3985 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" Mar 13 01:11:56.906641 master-0 kubenswrapper[3985]: I0313 01:11:56.906374 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerDied","Data":"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d"} Mar 13 01:11:56.906641 master-0 kubenswrapper[3985]: I0313 01:11:56.906498 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerDied","Data":"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162"} Mar 13 01:11:56.906641 master-0 kubenswrapper[3985]: I0313 01:11:56.906554 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerDied","Data":"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33"} Mar 13 01:11:56.906641 master-0 kubenswrapper[3985]: I0313 01:11:56.906575 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerDied","Data":"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8"} Mar 13 01:11:56.906641 master-0 kubenswrapper[3985]: I0313 01:11:56.906593 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerDied","Data":"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7"} Mar 13 01:11:56.906641 master-0 kubenswrapper[3985]: I0313 01:11:56.906631 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" 
event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerDied","Data":"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59"} Mar 13 01:11:56.906923 master-0 kubenswrapper[3985]: I0313 01:11:56.906640 3985 scope.go:117] "RemoveContainer" containerID="80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d" Mar 13 01:11:56.906923 master-0 kubenswrapper[3985]: I0313 01:11:56.906417 3985 generic.go:334] "Generic (PLEG): container finished" podID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" containerID="995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6" exitCode=143 Mar 13 01:11:56.907065 master-0 kubenswrapper[3985]: I0313 01:11:56.906649 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c"} Mar 13 01:11:56.907065 master-0 kubenswrapper[3985]: I0313 01:11:56.907059 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6"} Mar 13 01:11:56.907151 master-0 kubenswrapper[3985]: I0313 01:11:56.907069 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd"} Mar 13 01:11:56.907151 master-0 kubenswrapper[3985]: I0313 01:11:56.907082 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerDied","Data":"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c"} Mar 13 01:11:56.907151 master-0 kubenswrapper[3985]: I0313 01:11:56.907125 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d"} Mar 13 
01:11:56.907151 master-0 kubenswrapper[3985]: I0313 01:11:56.907137 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162"} Mar 13 01:11:56.907151 master-0 kubenswrapper[3985]: I0313 01:11:56.907146 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907155 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907163 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907172 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907205 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907213 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907221 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907232 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerDied","Data":"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907245 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907278 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907288 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907295 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907306 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907314 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59"} Mar 
13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907322 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c"} Mar 13 01:11:56.907338 master-0 kubenswrapper[3985]: I0313 01:11:56.907330 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907360 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907372 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-hwpff" event={"ID":"98a84646-2b22-45a4-9fd4-d41ea0a9b6d5","Type":"ContainerDied","Data":"4d381865a624bc04fbd2468a95ed0546e2a4ca37142c78f395f943846511aab8"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907383 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907392 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907400 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907407 3985 
pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907415 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907446 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907454 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907461 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6"} Mar 13 01:11:56.907829 master-0 kubenswrapper[3985]: I0313 01:11:56.907469 3985 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd"} Mar 13 01:11:56.943016 master-0 kubenswrapper[3985]: I0313 01:11:56.942968 3985 scope.go:117] "RemoveContainer" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" Mar 13 01:11:56.961081 master-0 kubenswrapper[3985]: I0313 01:11:56.960461 3985 scope.go:117] "RemoveContainer" containerID="349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33" Mar 13 01:11:56.976702 master-0 kubenswrapper[3985]: I0313 01:11:56.976504 3985 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-ovn-kubernetes/ovnkube-node-hwpff"] Mar 13 01:11:56.980443 master-0 kubenswrapper[3985]: I0313 01:11:56.979852 3985 scope.go:117] "RemoveContainer" containerID="4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8" Mar 13 01:11:56.985320 master-0 kubenswrapper[3985]: I0313 01:11:56.985273 3985 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-hwpff"] Mar 13 01:11:56.991549 master-0 kubenswrapper[3985]: I0313 01:11:56.991488 3985 scope.go:117] "RemoveContainer" containerID="7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7" Mar 13 01:11:57.001775 master-0 kubenswrapper[3985]: I0313 01:11:57.001724 3985 scope.go:117] "RemoveContainer" containerID="8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59" Mar 13 01:11:57.011801 master-0 kubenswrapper[3985]: I0313 01:11:57.011759 3985 scope.go:117] "RemoveContainer" containerID="531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c" Mar 13 01:11:57.026238 master-0 kubenswrapper[3985]: I0313 01:11:57.026164 3985 scope.go:117] "RemoveContainer" containerID="995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6" Mar 13 01:11:57.040444 master-0 kubenswrapper[3985]: I0313 01:11:57.040398 3985 scope.go:117] "RemoveContainer" containerID="5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd" Mar 13 01:11:57.069375 master-0 kubenswrapper[3985]: I0313 01:11:57.069319 3985 scope.go:117] "RemoveContainer" containerID="80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d" Mar 13 01:11:57.070026 master-0 kubenswrapper[3985]: E0313 01:11:57.069976 3985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": container with ID starting with 80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d not found: ID does not exist" 
containerID="80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d" Mar 13 01:11:57.070088 master-0 kubenswrapper[3985]: I0313 01:11:57.070029 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d"} err="failed to get container status \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": rpc error: code = NotFound desc = could not find container \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": container with ID starting with 80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d not found: ID does not exist" Mar 13 01:11:57.070088 master-0 kubenswrapper[3985]: I0313 01:11:57.070059 3985 scope.go:117] "RemoveContainer" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" Mar 13 01:11:57.070760 master-0 kubenswrapper[3985]: E0313 01:11:57.070711 3985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": container with ID starting with 594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162 not found: ID does not exist" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" Mar 13 01:11:57.070842 master-0 kubenswrapper[3985]: I0313 01:11:57.070769 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162"} err="failed to get container status \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": rpc error: code = NotFound desc = could not find container \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": container with ID starting with 594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162 not found: ID does not exist" Mar 13 01:11:57.070842 master-0 
kubenswrapper[3985]: I0313 01:11:57.070814 3985 scope.go:117] "RemoveContainer" containerID="349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33" Mar 13 01:11:57.071682 master-0 kubenswrapper[3985]: E0313 01:11:57.071640 3985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": container with ID starting with 349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33 not found: ID does not exist" containerID="349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33" Mar 13 01:11:57.071749 master-0 kubenswrapper[3985]: I0313 01:11:57.071684 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33"} err="failed to get container status \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": rpc error: code = NotFound desc = could not find container \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": container with ID starting with 349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33 not found: ID does not exist" Mar 13 01:11:57.071749 master-0 kubenswrapper[3985]: I0313 01:11:57.071715 3985 scope.go:117] "RemoveContainer" containerID="4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8" Mar 13 01:11:57.072056 master-0 kubenswrapper[3985]: E0313 01:11:57.072017 3985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": container with ID starting with 4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8 not found: ID does not exist" containerID="4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8" Mar 13 01:11:57.072056 master-0 kubenswrapper[3985]: I0313 01:11:57.072048 3985 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8"} err="failed to get container status \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": rpc error: code = NotFound desc = could not find container \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": container with ID starting with 4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8 not found: ID does not exist" Mar 13 01:11:57.072146 master-0 kubenswrapper[3985]: I0313 01:11:57.072065 3985 scope.go:117] "RemoveContainer" containerID="7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7" Mar 13 01:11:57.072561 master-0 kubenswrapper[3985]: E0313 01:11:57.072492 3985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": container with ID starting with 7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7 not found: ID does not exist" containerID="7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7" Mar 13 01:11:57.072623 master-0 kubenswrapper[3985]: I0313 01:11:57.072565 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7"} err="failed to get container status \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": rpc error: code = NotFound desc = could not find container \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": container with ID starting with 7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7 not found: ID does not exist" Mar 13 01:11:57.072623 master-0 kubenswrapper[3985]: I0313 01:11:57.072591 3985 scope.go:117] "RemoveContainer" containerID="8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59" Mar 13 
01:11:57.073181 master-0 kubenswrapper[3985]: E0313 01:11:57.073123 3985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": container with ID starting with 8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59 not found: ID does not exist" containerID="8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59" Mar 13 01:11:57.073260 master-0 kubenswrapper[3985]: I0313 01:11:57.073181 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59"} err="failed to get container status \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": rpc error: code = NotFound desc = could not find container \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": container with ID starting with 8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59 not found: ID does not exist" Mar 13 01:11:57.073260 master-0 kubenswrapper[3985]: I0313 01:11:57.073229 3985 scope.go:117] "RemoveContainer" containerID="531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c" Mar 13 01:11:57.073633 master-0 kubenswrapper[3985]: E0313 01:11:57.073595 3985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c\": container with ID starting with 531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c not found: ID does not exist" containerID="531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c" Mar 13 01:11:57.073633 master-0 kubenswrapper[3985]: I0313 01:11:57.073620 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c"} err="failed 
to get container status \"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c\": rpc error: code = NotFound desc = could not find container \"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c\": container with ID starting with 531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c not found: ID does not exist" Mar 13 01:11:57.073633 master-0 kubenswrapper[3985]: I0313 01:11:57.073635 3985 scope.go:117] "RemoveContainer" containerID="995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6" Mar 13 01:11:57.074007 master-0 kubenswrapper[3985]: E0313 01:11:57.073959 3985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6\": container with ID starting with 995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6 not found: ID does not exist" containerID="995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6" Mar 13 01:11:57.074007 master-0 kubenswrapper[3985]: I0313 01:11:57.073993 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6"} err="failed to get container status \"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6\": rpc error: code = NotFound desc = could not find container \"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6\": container with ID starting with 995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6 not found: ID does not exist" Mar 13 01:11:57.074138 master-0 kubenswrapper[3985]: I0313 01:11:57.074012 3985 scope.go:117] "RemoveContainer" containerID="5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd" Mar 13 01:11:57.074707 master-0 kubenswrapper[3985]: E0313 01:11:57.074661 3985 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not 
find container \"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd\": container with ID starting with 5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd not found: ID does not exist" containerID="5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd" Mar 13 01:11:57.074771 master-0 kubenswrapper[3985]: I0313 01:11:57.074709 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd"} err="failed to get container status \"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd\": rpc error: code = NotFound desc = could not find container \"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd\": container with ID starting with 5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd not found: ID does not exist" Mar 13 01:11:57.074771 master-0 kubenswrapper[3985]: I0313 01:11:57.074737 3985 scope.go:117] "RemoveContainer" containerID="80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d" Mar 13 01:11:57.075158 master-0 kubenswrapper[3985]: I0313 01:11:57.075132 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d"} err="failed to get container status \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": rpc error: code = NotFound desc = could not find container \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": container with ID starting with 80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d not found: ID does not exist" Mar 13 01:11:57.075158 master-0 kubenswrapper[3985]: I0313 01:11:57.075151 3985 scope.go:117] "RemoveContainer" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" Mar 13 01:11:57.075598 master-0 kubenswrapper[3985]: I0313 01:11:57.075563 3985 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162"} err="failed to get container status \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": rpc error: code = NotFound desc = could not find container \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": container with ID starting with 594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162 not found: ID does not exist" Mar 13 01:11:57.075598 master-0 kubenswrapper[3985]: I0313 01:11:57.075588 3985 scope.go:117] "RemoveContainer" containerID="349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33" Mar 13 01:11:57.075927 master-0 kubenswrapper[3985]: I0313 01:11:57.075904 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33"} err="failed to get container status \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": rpc error: code = NotFound desc = could not find container \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": container with ID starting with 349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33 not found: ID does not exist" Mar 13 01:11:57.075927 master-0 kubenswrapper[3985]: I0313 01:11:57.075923 3985 scope.go:117] "RemoveContainer" containerID="4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8" Mar 13 01:11:57.076298 master-0 kubenswrapper[3985]: I0313 01:11:57.076268 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8"} err="failed to get container status \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": rpc error: code = NotFound desc = could not find container \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": container with ID starting with 
4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8 not found: ID does not exist" Mar 13 01:11:57.076298 master-0 kubenswrapper[3985]: I0313 01:11:57.076286 3985 scope.go:117] "RemoveContainer" containerID="7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7" Mar 13 01:11:57.076685 master-0 kubenswrapper[3985]: I0313 01:11:57.076630 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7"} err="failed to get container status \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": rpc error: code = NotFound desc = could not find container \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": container with ID starting with 7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7 not found: ID does not exist" Mar 13 01:11:57.076685 master-0 kubenswrapper[3985]: I0313 01:11:57.076669 3985 scope.go:117] "RemoveContainer" containerID="8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59" Mar 13 01:11:57.077039 master-0 kubenswrapper[3985]: I0313 01:11:57.077012 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59"} err="failed to get container status \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": rpc error: code = NotFound desc = could not find container \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": container with ID starting with 8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59 not found: ID does not exist" Mar 13 01:11:57.077039 master-0 kubenswrapper[3985]: I0313 01:11:57.077034 3985 scope.go:117] "RemoveContainer" containerID="531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c" Mar 13 01:11:57.077486 master-0 kubenswrapper[3985]: I0313 01:11:57.077451 3985 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c"} err="failed to get container status \"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c\": rpc error: code = NotFound desc = could not find container \"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c\": container with ID starting with 531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c not found: ID does not exist" Mar 13 01:11:57.077486 master-0 kubenswrapper[3985]: I0313 01:11:57.077480 3985 scope.go:117] "RemoveContainer" containerID="995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6" Mar 13 01:11:57.077922 master-0 kubenswrapper[3985]: I0313 01:11:57.077885 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6"} err="failed to get container status \"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6\": rpc error: code = NotFound desc = could not find container \"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6\": container with ID starting with 995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6 not found: ID does not exist" Mar 13 01:11:57.077983 master-0 kubenswrapper[3985]: I0313 01:11:57.077922 3985 scope.go:117] "RemoveContainer" containerID="5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd" Mar 13 01:11:57.078254 master-0 kubenswrapper[3985]: I0313 01:11:57.078226 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd"} err="failed to get container status \"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd\": rpc error: code = NotFound desc = could not find container \"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd\": container with ID starting with 
5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd not found: ID does not exist" Mar 13 01:11:57.078254 master-0 kubenswrapper[3985]: I0313 01:11:57.078248 3985 scope.go:117] "RemoveContainer" containerID="80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d" Mar 13 01:11:57.078649 master-0 kubenswrapper[3985]: I0313 01:11:57.078619 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d"} err="failed to get container status \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": rpc error: code = NotFound desc = could not find container \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": container with ID starting with 80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d not found: ID does not exist" Mar 13 01:11:57.078649 master-0 kubenswrapper[3985]: I0313 01:11:57.078645 3985 scope.go:117] "RemoveContainer" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" Mar 13 01:11:57.079053 master-0 kubenswrapper[3985]: I0313 01:11:57.079025 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162"} err="failed to get container status \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": rpc error: code = NotFound desc = could not find container \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": container with ID starting with 594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162 not found: ID does not exist" Mar 13 01:11:57.079118 master-0 kubenswrapper[3985]: I0313 01:11:57.079052 3985 scope.go:117] "RemoveContainer" containerID="349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33" Mar 13 01:11:57.079414 master-0 kubenswrapper[3985]: I0313 01:11:57.079390 3985 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33"} err="failed to get container status \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": rpc error: code = NotFound desc = could not find container \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": container with ID starting with 349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33 not found: ID does not exist" Mar 13 01:11:57.079414 master-0 kubenswrapper[3985]: I0313 01:11:57.079410 3985 scope.go:117] "RemoveContainer" containerID="4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8" Mar 13 01:11:57.079824 master-0 kubenswrapper[3985]: I0313 01:11:57.079783 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8"} err="failed to get container status \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": rpc error: code = NotFound desc = could not find container \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": container with ID starting with 4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8 not found: ID does not exist" Mar 13 01:11:57.079824 master-0 kubenswrapper[3985]: I0313 01:11:57.079821 3985 scope.go:117] "RemoveContainer" containerID="7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7" Mar 13 01:11:57.080190 master-0 kubenswrapper[3985]: I0313 01:11:57.080163 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7"} err="failed to get container status \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": rpc error: code = NotFound desc = could not find container \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": container with ID starting with 
7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7 not found: ID does not exist" Mar 13 01:11:57.080190 master-0 kubenswrapper[3985]: I0313 01:11:57.080188 3985 scope.go:117] "RemoveContainer" containerID="8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59" Mar 13 01:11:57.080633 master-0 kubenswrapper[3985]: I0313 01:11:57.080581 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59"} err="failed to get container status \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": rpc error: code = NotFound desc = could not find container \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": container with ID starting with 8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59 not found: ID does not exist" Mar 13 01:11:57.080633 master-0 kubenswrapper[3985]: I0313 01:11:57.080622 3985 scope.go:117] "RemoveContainer" containerID="531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c" Mar 13 01:11:57.081626 master-0 kubenswrapper[3985]: I0313 01:11:57.081584 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c"} err="failed to get container status \"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c\": rpc error: code = NotFound desc = could not find container \"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c\": container with ID starting with 531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c not found: ID does not exist" Mar 13 01:11:57.081626 master-0 kubenswrapper[3985]: I0313 01:11:57.081615 3985 scope.go:117] "RemoveContainer" containerID="995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6" Mar 13 01:11:57.082099 master-0 kubenswrapper[3985]: I0313 01:11:57.082061 3985 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6"} err="failed to get container status \"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6\": rpc error: code = NotFound desc = could not find container \"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6\": container with ID starting with 995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6 not found: ID does not exist" Mar 13 01:11:57.082099 master-0 kubenswrapper[3985]: I0313 01:11:57.082087 3985 scope.go:117] "RemoveContainer" containerID="5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd" Mar 13 01:11:57.082426 master-0 kubenswrapper[3985]: I0313 01:11:57.082389 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd"} err="failed to get container status \"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd\": rpc error: code = NotFound desc = could not find container \"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd\": container with ID starting with 5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd not found: ID does not exist" Mar 13 01:11:57.082426 master-0 kubenswrapper[3985]: I0313 01:11:57.082414 3985 scope.go:117] "RemoveContainer" containerID="80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d" Mar 13 01:11:57.082915 master-0 kubenswrapper[3985]: I0313 01:11:57.082874 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d"} err="failed to get container status \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": rpc error: code = NotFound desc = could not find container \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": container with ID starting with 
80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d not found: ID does not exist" Mar 13 01:11:57.082915 master-0 kubenswrapper[3985]: I0313 01:11:57.082897 3985 scope.go:117] "RemoveContainer" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" Mar 13 01:11:57.083172 master-0 kubenswrapper[3985]: I0313 01:11:57.083140 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162"} err="failed to get container status \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": rpc error: code = NotFound desc = could not find container \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": container with ID starting with 594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162 not found: ID does not exist" Mar 13 01:11:57.083172 master-0 kubenswrapper[3985]: I0313 01:11:57.083159 3985 scope.go:117] "RemoveContainer" containerID="349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33" Mar 13 01:11:57.083462 master-0 kubenswrapper[3985]: I0313 01:11:57.083409 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33"} err="failed to get container status \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": rpc error: code = NotFound desc = could not find container \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": container with ID starting with 349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33 not found: ID does not exist" Mar 13 01:11:57.083462 master-0 kubenswrapper[3985]: I0313 01:11:57.083456 3985 scope.go:117] "RemoveContainer" containerID="4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8" Mar 13 01:11:57.083822 master-0 kubenswrapper[3985]: I0313 01:11:57.083784 3985 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8"} err="failed to get container status \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": rpc error: code = NotFound desc = could not find container \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": container with ID starting with 4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8 not found: ID does not exist" Mar 13 01:11:57.083822 master-0 kubenswrapper[3985]: I0313 01:11:57.083810 3985 scope.go:117] "RemoveContainer" containerID="7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7" Mar 13 01:11:57.084333 master-0 kubenswrapper[3985]: I0313 01:11:57.084285 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7"} err="failed to get container status \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": rpc error: code = NotFound desc = could not find container \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": container with ID starting with 7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7 not found: ID does not exist" Mar 13 01:11:57.084333 master-0 kubenswrapper[3985]: I0313 01:11:57.084325 3985 scope.go:117] "RemoveContainer" containerID="8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59" Mar 13 01:11:57.084742 master-0 kubenswrapper[3985]: I0313 01:11:57.084692 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59"} err="failed to get container status \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": rpc error: code = NotFound desc = could not find container \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": container with ID starting with 
8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59 not found: ID does not exist" Mar 13 01:11:57.084742 master-0 kubenswrapper[3985]: I0313 01:11:57.084734 3985 scope.go:117] "RemoveContainer" containerID="531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c" Mar 13 01:11:57.085054 master-0 kubenswrapper[3985]: I0313 01:11:57.085023 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c"} err="failed to get container status \"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c\": rpc error: code = NotFound desc = could not find container \"531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c\": container with ID starting with 531bcf553d16d1840bffac48921c1e67e357115488d3e2e76a7c351514af7d0c not found: ID does not exist" Mar 13 01:11:57.085054 master-0 kubenswrapper[3985]: I0313 01:11:57.085041 3985 scope.go:117] "RemoveContainer" containerID="995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6" Mar 13 01:11:57.085534 master-0 kubenswrapper[3985]: I0313 01:11:57.085468 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6"} err="failed to get container status \"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6\": rpc error: code = NotFound desc = could not find container \"995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6\": container with ID starting with 995fefbe59db07dbe1c6b21a178e747ac978ad0f0df0ad33fdea8d5089f9c8a6 not found: ID does not exist" Mar 13 01:11:57.085534 master-0 kubenswrapper[3985]: I0313 01:11:57.085528 3985 scope.go:117] "RemoveContainer" containerID="5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd" Mar 13 01:11:57.085956 master-0 kubenswrapper[3985]: I0313 01:11:57.085907 3985 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd"} err="failed to get container status \"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd\": rpc error: code = NotFound desc = could not find container \"5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd\": container with ID starting with 5ad64c245a6d72a3511caa03e024555ae4739cbd3138ee0547c90e03ba3623dd not found: ID does not exist" Mar 13 01:11:57.086004 master-0 kubenswrapper[3985]: I0313 01:11:57.085962 3985 scope.go:117] "RemoveContainer" containerID="80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d" Mar 13 01:11:57.086620 master-0 kubenswrapper[3985]: I0313 01:11:57.086582 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d"} err="failed to get container status \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": rpc error: code = NotFound desc = could not find container \"80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d\": container with ID starting with 80444ac1b0be208dc2e6758dda209248f8f3099173f295a8b0dc32ecd665173d not found: ID does not exist" Mar 13 01:11:57.086620 master-0 kubenswrapper[3985]: I0313 01:11:57.086612 3985 scope.go:117] "RemoveContainer" containerID="594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162" Mar 13 01:11:57.087920 master-0 kubenswrapper[3985]: I0313 01:11:57.087883 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162"} err="failed to get container status \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": rpc error: code = NotFound desc = could not find container \"594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162\": container with ID starting with 
594f61f51068af079e954e491bc18ec940ee3ac1088e04be4b2bc7c6137a3162 not found: ID does not exist" Mar 13 01:11:57.087920 master-0 kubenswrapper[3985]: I0313 01:11:57.087909 3985 scope.go:117] "RemoveContainer" containerID="349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33" Mar 13 01:11:57.088327 master-0 kubenswrapper[3985]: I0313 01:11:57.088286 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33"} err="failed to get container status \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": rpc error: code = NotFound desc = could not find container \"349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33\": container with ID starting with 349626e8db13904bd3c20d06b81cb0110a8f8b87243f9587720bbb5d6cbb6e33 not found: ID does not exist" Mar 13 01:11:57.088327 master-0 kubenswrapper[3985]: I0313 01:11:57.088315 3985 scope.go:117] "RemoveContainer" containerID="4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8" Mar 13 01:11:57.088588 master-0 kubenswrapper[3985]: I0313 01:11:57.088548 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8"} err="failed to get container status \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": rpc error: code = NotFound desc = could not find container \"4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8\": container with ID starting with 4a6899fe25f5c3a5fbf08bd01fb521ace13c679874724f26097882ecda3e34c8 not found: ID does not exist" Mar 13 01:11:57.088588 master-0 kubenswrapper[3985]: I0313 01:11:57.088574 3985 scope.go:117] "RemoveContainer" containerID="7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7" Mar 13 01:11:57.088859 master-0 kubenswrapper[3985]: I0313 01:11:57.088819 3985 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7"} err="failed to get container status \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": rpc error: code = NotFound desc = could not find container \"7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7\": container with ID starting with 7f974bd8a773dc0e475f157aabdb9f515ad3dec9653af6f63f34071a1417cbe7 not found: ID does not exist" Mar 13 01:11:57.088859 master-0 kubenswrapper[3985]: I0313 01:11:57.088848 3985 scope.go:117] "RemoveContainer" containerID="8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59" Mar 13 01:11:57.089156 master-0 kubenswrapper[3985]: I0313 01:11:57.089111 3985 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59"} err="failed to get container status \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": rpc error: code = NotFound desc = could not find container \"8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59\": container with ID starting with 8a4079e7d77a4d2c2d328aa381024542199c0558158d2ef7e5f90c8843d14f59 not found: ID does not exist" Mar 13 01:11:57.176794 master-0 kubenswrapper[3985]: I0313 01:11:57.176532 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:57.176794 master-0 kubenswrapper[3985]: E0313 01:11:57.176735 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:57.184624 master-0 kubenswrapper[3985]: I0313 01:11:57.184581 3985 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98a84646-2b22-45a4-9fd4-d41ea0a9b6d5" path="/var/lib/kubelet/pods/98a84646-2b22-45a4-9fd4-d41ea0a9b6d5/volumes" Mar 13 01:11:57.190247 master-0 kubenswrapper[3985]: I0313 01:11:57.190205 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 13 01:11:57.924399 master-0 kubenswrapper[3985]: I0313 01:11:57.924300 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" event={"ID":"49a28ab7-1176-4213-b037-19fe18bbe57b","Type":"ContainerStarted","Data":"e0c138a2f14690cec28d63719eb2cdbed60b3bd26214b7360047774a4db1b690"} Mar 13 01:11:57.924399 master-0 kubenswrapper[3985]: I0313 01:11:57.924375 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" event={"ID":"49a28ab7-1176-4213-b037-19fe18bbe57b","Type":"ContainerStarted","Data":"34101842a5b0c1ab502c3b4b2fdb7211eb950002b7acf03b1517d9a70cf63171"} Mar 13 01:11:57.924399 master-0 kubenswrapper[3985]: I0313 01:11:57.924396 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" event={"ID":"49a28ab7-1176-4213-b037-19fe18bbe57b","Type":"ContainerStarted","Data":"2c2ca576d566d478f04bb32afd960c4bb804d749381e06060f4bad95e147aaa7"} Mar 13 01:11:57.924399 master-0 kubenswrapper[3985]: I0313 01:11:57.924415 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" event={"ID":"49a28ab7-1176-4213-b037-19fe18bbe57b","Type":"ContainerStarted","Data":"9c7587f59e76a74a199d2d8d932061b6d2cbdc1bcdcdc13912a407d4b1bba540"} Mar 13 01:11:57.924399 master-0 kubenswrapper[3985]: I0313 01:11:57.924437 3985 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" event={"ID":"49a28ab7-1176-4213-b037-19fe18bbe57b","Type":"ContainerStarted","Data":"8eb748b9ce2da44f88abc974c1fa335342aeae42a1782809ec5c88a127d3eb2d"} Mar 13 01:11:57.926288 master-0 kubenswrapper[3985]: I0313 01:11:57.924455 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" event={"ID":"49a28ab7-1176-4213-b037-19fe18bbe57b","Type":"ContainerStarted","Data":"6516276cffc38b95b4fbfebbfbf2303fa54c7555e186c2d6ade1e5e8a5ff6541"} Mar 13 01:11:58.178098 master-0 kubenswrapper[3985]: I0313 01:11:58.177208 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:11:58.178098 master-0 kubenswrapper[3985]: E0313 01:11:58.177765 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b" Mar 13 01:11:59.177103 master-0 kubenswrapper[3985]: I0313 01:11:59.177021 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:11:59.178261 master-0 kubenswrapper[3985]: E0313 01:11:59.177309 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:11:59.941193 master-0 kubenswrapper[3985]: I0313 01:11:59.941016 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" event={"ID":"49a28ab7-1176-4213-b037-19fe18bbe57b","Type":"ContainerStarted","Data":"733541f4b2feb4f924e9eaf7e29adc9765f090b3a0941af79e9c6984a3cf1194"} Mar 13 01:12:00.176843 master-0 kubenswrapper[3985]: I0313 01:12:00.176710 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:00.177142 master-0 kubenswrapper[3985]: E0313 01:12:00.176911 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b" Mar 13 01:12:01.177278 master-0 kubenswrapper[3985]: I0313 01:12:01.177206 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:01.178087 master-0 kubenswrapper[3985]: E0313 01:12:01.177456 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:12:01.959887 master-0 kubenswrapper[3985]: I0313 01:12:01.959798 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" event={"ID":"49a28ab7-1176-4213-b037-19fe18bbe57b","Type":"ContainerStarted","Data":"536f163b9236695e30f567b9511e440f8d2acb5e850935b653ae0705c8bd1d8d"} Mar 13 01:12:01.960345 master-0 kubenswrapper[3985]: I0313 01:12:01.960218 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:01.980294 master-0 kubenswrapper[3985]: I0313 01:12:01.977590 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podStartSLOduration=4.977564981 podStartE2EDuration="4.977564981s" podCreationTimestamp="2026-03-13 01:11:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:01.976898436 +0000 UTC m=+107.853578680" watchObservedRunningTime="2026-03-13 01:12:01.977564981 +0000 UTC m=+107.854245195" Mar 13 01:12:02.001141 master-0 kubenswrapper[3985]: I0313 01:12:02.000254 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:02.017112 master-0 kubenswrapper[3985]: I0313 01:12:02.015733 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:02.017112 master-0 kubenswrapper[3985]: E0313 01:12:02.016636 3985 secret.go:189] Couldn't get secret 
openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 01:12:02.017112 master-0 kubenswrapper[3985]: E0313 01:12:02.016700 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:13:06.016681752 +0000 UTC m=+171.893361966 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found Mar 13 01:12:02.046783 master-0 kubenswrapper[3985]: I0313 01:12:02.046451 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" podStartSLOduration=6.046416568 podStartE2EDuration="6.046416568s" podCreationTimestamp="2026-03-13 01:11:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:02.01515258 +0000 UTC m=+107.891832824" watchObservedRunningTime="2026-03-13 01:12:02.046416568 +0000 UTC m=+107.923096822" Mar 13 01:12:02.176778 master-0 kubenswrapper[3985]: I0313 01:12:02.176637 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:02.177076 master-0 kubenswrapper[3985]: E0313 01:12:02.176985 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b" Mar 13 01:12:02.849160 master-0 kubenswrapper[3985]: I0313 01:12:02.849023 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-49pfj"] Mar 13 01:12:02.852242 master-0 kubenswrapper[3985]: I0313 01:12:02.851689 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9hwz9"] Mar 13 01:12:02.852242 master-0 kubenswrapper[3985]: I0313 01:12:02.851810 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:02.852242 master-0 kubenswrapper[3985]: E0313 01:12:02.851907 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:12:02.963350 master-0 kubenswrapper[3985]: I0313 01:12:02.963143 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:02.963350 master-0 kubenswrapper[3985]: E0313 01:12:02.963263 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b" Mar 13 01:12:02.964168 master-0 kubenswrapper[3985]: I0313 01:12:02.964096 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:02.964249 master-0 kubenswrapper[3985]: I0313 01:12:02.964175 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:02.987385 master-0 kubenswrapper[3985]: I0313 01:12:02.987305 3985 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:04.177367 master-0 kubenswrapper[3985]: I0313 01:12:04.177269 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:04.178406 master-0 kubenswrapper[3985]: E0313 01:12:04.177478 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b" Mar 13 01:12:04.944316 master-0 kubenswrapper[3985]: I0313 01:12:04.943960 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:04.944316 master-0 kubenswrapper[3985]: E0313 01:12:04.944298 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Mar 13 01:12:04.944316 master-0 kubenswrapper[3985]: E0313 01:12:04.944405 3985 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Mar 13 01:12:04.944316 master-0 kubenswrapper[3985]: E0313 01:12:04.944438 3985 projected.go:194] Error preparing data for projected volume kube-api-access-tlgsr for pod openshift-network-diagnostics/network-check-target-49pfj: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 01:12:04.945118 master-0 kubenswrapper[3985]: E0313 01:12:04.944629 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr podName:34889110-f282-4c2c-a2b0-620033559e1b nodeName:}" failed. No retries permitted until 2026-03-13 01:12:36.944588042 +0000 UTC m=+142.821268376 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-tlgsr" (UniqueName: "kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr") pod "network-check-target-49pfj" (UID: "34889110-f282-4c2c-a2b0-620033559e1b") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Mar 13 01:12:05.177141 master-0 kubenswrapper[3985]: I0313 01:12:05.177060 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:05.178953 master-0 kubenswrapper[3985]: E0313 01:12:05.178866 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" Mar 13 01:12:06.176924 master-0 kubenswrapper[3985]: I0313 01:12:06.176827 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:06.177420 master-0 kubenswrapper[3985]: E0313 01:12:06.177065 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-49pfj" podUID="34889110-f282-4c2c-a2b0-620033559e1b" Mar 13 01:12:07.177129 master-0 kubenswrapper[3985]: I0313 01:12:07.176729 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:12:07.177129 master-0 kubenswrapper[3985]: E0313 01:12:07.176918 3985 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-9hwz9" podUID="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d"
Mar 13 01:12:07.927331 master-0 kubenswrapper[3985]: I0313 01:12:07.927270 3985 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeReady"
Mar 13 01:12:07.927958 master-0 kubenswrapper[3985]: I0313 01:12:07.927933 3985 kubelet_node_status.go:538] "Fast updating node status as it just became ready"
Mar 13 01:12:07.976613 master-0 kubenswrapper[3985]: I0313 01:12:07.976165 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"]
Mar 13 01:12:07.977944 master-0 kubenswrapper[3985]: I0313 01:12:07.977914 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:12:07.991023 master-0 kubenswrapper[3985]: I0313 01:12:07.990453 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 13 01:12:07.991023 master-0 kubenswrapper[3985]: I0313 01:12:07.990912 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 01:12:07.991023 master-0 kubenswrapper[3985]: I0313 01:12:07.990929 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 13 01:12:07.991423 master-0 kubenswrapper[3985]: I0313 01:12:07.991270 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt"
Mar 13 01:12:07.995723 master-0 kubenswrapper[3985]: I0313 01:12:07.993037 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf"]
Mar 13 01:12:07.995723 master-0 kubenswrapper[3985]: I0313 01:12:07.993614 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf"
Mar 13 01:12:07.998813 master-0 kubenswrapper[3985]: I0313 01:12:07.997230 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"]
Mar 13 01:12:08.004886 master-0 kubenswrapper[3985]: I0313 01:12:08.004837 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 01:12:08.005560 master-0 kubenswrapper[3985]: I0313 01:12:08.005542 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 13 01:12:08.005953 master-0 kubenswrapper[3985]: I0313 01:12:08.005936 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.010737 master-0 kubenswrapper[3985]: I0313 01:12:08.009500 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 13 01:12:08.012191 master-0 kubenswrapper[3985]: I0313 01:12:08.011998 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8"]
Mar 13 01:12:08.012257 master-0 kubenswrapper[3985]: I0313 01:12:08.012216 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 01:12:08.012574 master-0 kubenswrapper[3985]: I0313 01:12:08.012500 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"]
Mar 13 01:12:08.012779 master-0 kubenswrapper[3985]: I0313 01:12:08.012751 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:12:08.013453 master-0 kubenswrapper[3985]: I0313 01:12:08.013432 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"]
Mar 13 01:12:08.013656 master-0 kubenswrapper[3985]: I0313 01:12:08.013637 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"
Mar 13 01:12:08.014016 master-0 kubenswrapper[3985]: I0313 01:12:08.013997 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"]
Mar 13 01:12:08.014189 master-0 kubenswrapper[3985]: I0313 01:12:08.014175 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"
Mar 13 01:12:08.015414 master-0 kubenswrapper[3985]: I0313 01:12:08.015399 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"
Mar 13 01:12:08.018111 master-0 kubenswrapper[3985]: I0313 01:12:08.018073 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-wb6qq"]
Mar 13 01:12:08.018716 master-0 kubenswrapper[3985]: I0313 01:12:08.018613 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn"]
Mar 13 01:12:08.019796 master-0 kubenswrapper[3985]: I0313 01:12:08.019709 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8"
Mar 13 01:12:08.019796 master-0 kubenswrapper[3985]: I0313 01:12:08.019781 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq"
Mar 13 01:12:08.019958 master-0 kubenswrapper[3985]: I0313 01:12:08.019911 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8"]
Mar 13 01:12:08.020262 master-0 kubenswrapper[3985]: I0313 01:12:08.019724 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn"
Mar 13 01:12:08.020644 master-0 kubenswrapper[3985]: I0313 01:12:08.020603 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8"
Mar 13 01:12:08.022166 master-0 kubenswrapper[3985]: I0313 01:12:08.022131 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 13 01:12:08.023541 master-0 kubenswrapper[3985]: I0313 01:12:08.023486 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h"]
Mar 13 01:12:08.024291 master-0 kubenswrapper[3985]: I0313 01:12:08.023925 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h"
Mar 13 01:12:08.029010 master-0 kubenswrapper[3985]: I0313 01:12:08.028364 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"]
Mar 13 01:12:08.029010 master-0 kubenswrapper[3985]: I0313 01:12:08.028877 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5"]
Mar 13 01:12:08.029253 master-0 kubenswrapper[3985]: I0313 01:12:08.029194 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:12:08.029311 master-0 kubenswrapper[3985]: I0313 01:12:08.029252 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5"
Mar 13 01:12:08.031088 master-0 kubenswrapper[3985]: I0313 01:12:08.031056 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"]
Mar 13 01:12:08.031876 master-0 kubenswrapper[3985]: I0313 01:12:08.031725 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"]
Mar 13 01:12:08.032162 master-0 kubenswrapper[3985]: I0313 01:12:08.032131 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg"]
Mar 13 01:12:08.033724 master-0 kubenswrapper[3985]: I0313 01:12:08.033681 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"
Mar 13 01:12:08.034825 master-0 kubenswrapper[3985]: I0313 01:12:08.034788 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:12:08.035657 master-0 kubenswrapper[3985]: I0313 01:12:08.035623 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8"]
Mar 13 01:12:08.036206 master-0 kubenswrapper[3985]: I0313 01:12:08.036171 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddtwn"]
Mar 13 01:12:08.037092 master-0 kubenswrapper[3985]: I0313 01:12:08.036851 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn"
Mar 13 01:12:08.037302 master-0 kubenswrapper[3985]: I0313 01:12:08.037268 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8"
Mar 13 01:12:08.037831 master-0 kubenswrapper[3985]: I0313 01:12:08.037803 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"]
Mar 13 01:12:08.045962 master-0 kubenswrapper[3985]: I0313 01:12:08.045913 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg"
Mar 13 01:12:08.062095 master-0 kubenswrapper[3985]: I0313 01:12:08.062052 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:12:08.063550 master-0 kubenswrapper[3985]: I0313 01:12:08.063494 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 01:12:08.064050 master-0 kubenswrapper[3985]: I0313 01:12:08.064007 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config"
Mar 13 01:12:08.064685 master-0 kubenswrapper[3985]: I0313 01:12:08.064669 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 01:12:08.064978 master-0 kubenswrapper[3985]: I0313 01:12:08.064961 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 13 01:12:08.065885 master-0 kubenswrapper[3985]: I0313 01:12:08.065648 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"]
Mar 13 01:12:08.067795 master-0 kubenswrapper[3985]: I0313 01:12:08.066295 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"]
Mar 13 01:12:08.067795 master-0 kubenswrapper[3985]: I0313 01:12:08.066592 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn"]
Mar 13 01:12:08.067795 master-0 kubenswrapper[3985]: I0313 01:12:08.066682 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:12:08.067795 master-0 kubenswrapper[3985]: I0313 01:12:08.066778 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:12:08.067795 master-0 kubenswrapper[3985]: I0313 01:12:08.067491 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn"
Mar 13 01:12:08.069054 master-0 kubenswrapper[3985]: I0313 01:12:08.069019 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 01:12:08.069254 master-0 kubenswrapper[3985]: I0313 01:12:08.069224 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 13 01:12:08.069616 master-0 kubenswrapper[3985]: I0313 01:12:08.069582 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 13 01:12:08.069911 master-0 kubenswrapper[3985]: I0313 01:12:08.069876 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 01:12:08.070017 master-0 kubenswrapper[3985]: I0313 01:12:08.070000 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 13 01:12:08.070192 master-0 kubenswrapper[3985]: I0313 01:12:08.070143 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 13 01:12:08.070429 master-0 kubenswrapper[3985]: I0313 01:12:08.070410 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 01:12:08.070607 master-0 kubenswrapper[3985]: I0313 01:12:08.070574 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 13 01:12:08.070842 master-0 kubenswrapper[3985]: I0313 01:12:08.070811 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.070985 master-0 kubenswrapper[3985]: I0313 01:12:08.070966 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 13 01:12:08.071071 master-0 kubenswrapper[3985]: I0313 01:12:08.071042 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.071264 master-0 kubenswrapper[3985]: I0313 01:12:08.071231 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 01:12:08.071385 master-0 kubenswrapper[3985]: I0313 01:12:08.071356 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.072455 master-0 kubenswrapper[3985]: I0313 01:12:08.071474 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 01:12:08.072455 master-0 kubenswrapper[3985]: I0313 01:12:08.071704 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Mar 13 01:12:08.072455 master-0 kubenswrapper[3985]: I0313 01:12:08.071833 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 13 01:12:08.072455 master-0 kubenswrapper[3985]: I0313 01:12:08.071942 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 13 01:12:08.072455 master-0 kubenswrapper[3985]: I0313 01:12:08.072057 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 01:12:08.072710 master-0 kubenswrapper[3985]: I0313 01:12:08.070493 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret"
Mar 13 01:12:08.072710 master-0 kubenswrapper[3985]: I0313 01:12:08.072656 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls"
Mar 13 01:12:08.072792 master-0 kubenswrapper[3985]: I0313 01:12:08.070996 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.072861 master-0 kubenswrapper[3985]: I0313 01:12:08.072822 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config"
Mar 13 01:12:08.072917 master-0 kubenswrapper[3985]: I0313 01:12:08.072856 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 13 01:12:08.072917 master-0 kubenswrapper[3985]: I0313 01:12:08.072910 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 13 01:12:08.073007 master-0 kubenswrapper[3985]: I0313 01:12:08.072904 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 13 01:12:08.073065 master-0 kubenswrapper[3985]: I0313 01:12:08.073018 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 01:12:08.073117 master-0 kubenswrapper[3985]: I0313 01:12:08.073074 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 01:12:08.073170 master-0 kubenswrapper[3985]: I0313 01:12:08.073025 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Mar 13 01:12:08.073170 master-0 kubenswrapper[3985]: I0313 01:12:08.073135 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 13 01:12:08.073261 master-0 kubenswrapper[3985]: I0313 01:12:08.073188 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.073261 master-0 kubenswrapper[3985]: I0313 01:12:08.073205 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 01:12:08.073354 master-0 kubenswrapper[3985]: I0313 01:12:08.073146 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.073354 master-0 kubenswrapper[3985]: I0313 01:12:08.073299 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.073354 master-0 kubenswrapper[3985]: I0313 01:12:08.073341 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 13 01:12:08.073354 master-0 kubenswrapper[3985]: I0313 01:12:08.073356 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 13 01:12:08.073563 master-0 kubenswrapper[3985]: I0313 01:12:08.073394 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.073563 master-0 kubenswrapper[3985]: I0313 01:12:08.073531 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.073563 master-0 kubenswrapper[3985]: I0313 01:12:08.073542 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 01:12:08.073563 master-0 kubenswrapper[3985]: I0313 01:12:08.073073 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 01:12:08.073715 master-0 kubenswrapper[3985]: I0313 01:12:08.073604 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 01:12:08.073715 master-0 kubenswrapper[3985]: I0313 01:12:08.073650 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.073715 master-0 kubenswrapper[3985]: I0313 01:12:08.073684 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.073831 master-0 kubenswrapper[3985]: I0313 01:12:08.073743 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 01:12:08.073831 master-0 kubenswrapper[3985]: I0313 01:12:08.073785 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 13 01:12:08.073831 master-0 kubenswrapper[3985]: I0313 01:12:08.073821 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 01:12:08.073939 master-0 kubenswrapper[3985]: I0313 01:12:08.073755 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 13 01:12:08.073939 master-0 kubenswrapper[3985]: I0313 01:12:08.073877 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 13 01:12:08.074018 master-0 kubenswrapper[3985]: I0313 01:12:08.073991 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 01:12:08.074018 master-0 kubenswrapper[3985]: I0313 01:12:08.074002 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 13 01:12:08.074107 master-0 kubenswrapper[3985]: I0313 01:12:08.073999 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 01:12:08.074310 master-0 kubenswrapper[3985]: I0313 01:12:08.074271 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 01:12:08.074566 master-0 kubenswrapper[3985]: I0313 01:12:08.074539 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 01:12:08.075934 master-0 kubenswrapper[3985]: I0313 01:12:08.075829 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf"]
Mar 13 01:12:08.076817 master-0 kubenswrapper[3985]: I0313 01:12:08.076527 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfc2caf-126e-41b9-9b31-05f7a45d8536-config\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf"
Mar 13 01:12:08.076817 master-0 kubenswrapper[3985]: I0313 01:12:08.076586 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2nbvg\" (UniqueName: \"kubernetes.io/projected/fbfc2caf-126e-41b9-9b31-05f7a45d8536-kube-api-access-2nbvg\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf"
Mar 13 01:12:08.076817 master-0 kubenswrapper[3985]: I0313 01:12:08.076629 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-serving-cert\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:12:08.076817 master-0 kubenswrapper[3985]: I0313 01:12:08.076724 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6fd82994-f4d4-49e9-8742-07e206322e76-available-featuregates\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"
Mar 13 01:12:08.077161 master-0 kubenswrapper[3985]: I0313 01:12:08.076830 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8"
Mar 13 01:12:08.077161 master-0 kubenswrapper[3985]: I0313 01:12:08.076923 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8"
Mar 13 01:12:08.077161 master-0 kubenswrapper[3985]: I0313 01:12:08.076976 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2jgj\" (UniqueName: \"kubernetes.io/projected/d163333f-fda5-4067-ad7c-6f646ae411c8-kube-api-access-v2jgj\") pod \"csi-snapshot-controller-operator-5685fbc7d-478l8\" (UID: \"d163333f-fda5-4067-ad7c-6f646ae411c8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8"
Mar 13 01:12:08.077303 master-0 kubenswrapper[3985]: I0313 01:12:08.077205 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhk76\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-kube-api-access-fhk76\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:12:08.077478 master-0 kubenswrapper[3985]: I0313 01:12:08.077430 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkzq\" (UniqueName: \"kubernetes.io/projected/74efa52b-fd97-418a-9a44-914442633f74-kube-api-access-8jkzq\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg"
Mar 13 01:12:08.077564 master-0 kubenswrapper[3985]: I0313 01:12:08.077502 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:12:08.077606 master-0 kubenswrapper[3985]: I0313 01:12:08.077572 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75a53c09-210a-4346-99b0-a632b9e0a3c9-trusted-ca\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"
Mar 13 01:12:08.077644 master-0 kubenswrapper[3985]: I0313 01:12:08.077630 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:12:08.077717 master-0 kubenswrapper[3985]: I0313 01:12:08.077689 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fz9qf\" (UniqueName: \"kubernetes.io/projected/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-kube-api-access-fz9qf\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:12:08.077764 master-0 kubenswrapper[3985]: I0313 01:12:08.077726 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:12:08.077764 master-0 kubenswrapper[3985]: I0313 01:12:08.077751 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd82994-f4d4-49e9-8742-07e206322e76-serving-cert\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"
Mar 13 01:12:08.077822 master-0 kubenswrapper[3985]: I0313 01:12:08.077789 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:12:08.077822 master-0 kubenswrapper[3985]: I0313 01:12:08.077816 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:12:08.077890 master-0 kubenswrapper[3985]: I0313 01:12:08.077854 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-config\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"
Mar 13 01:12:08.077952 master-0 kubenswrapper[3985]: I0313 01:12:08.077928 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"
Mar 13 01:12:08.077991 master-0 kubenswrapper[3985]: I0313 01:12:08.077962 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5757329-8692-4719-b3c7-b5df78110fcf-serving-cert\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"
Mar 13 01:12:08.078029 master-0 kubenswrapper[3985]: I0313 01:12:08.078004 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde89b0b-7133-4b97-9e35-51c0382bd366-config\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5"
Mar 13 01:12:08.078061 master-0 kubenswrapper[3985]: I0313 01:12:08.078043 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-operand-assets\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn"
Mar 13 01:12:08.078089 master-0 kubenswrapper[3985]: I0313 01:12:08.078069 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8"
Mar 13 01:12:08.078175 master-0 kubenswrapper[3985]: I0313 01:12:08.078114 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpdjh\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-kube-api-access-zpdjh\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"
Mar 13 01:12:08.078175 master-0 kubenswrapper[3985]: I0313 01:12:08.078171 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-bound-sa-token\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"
Mar 13 01:12:08.078274 master-0 kubenswrapper[3985]: I0313 01:12:08.078200 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"
Mar 13 01:12:08.078274 master-0 kubenswrapper[3985]: I0313 01:12:08.078232 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6db75e5-efd1-4bfa-9941-0934d7621ba2-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8"
Mar 13 01:12:08.078338 master-0 kubenswrapper[3985]: I0313 01:12:08.078279 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemetry-config\"
(UniqueName: \"kubernetes.io/configmap/46015913-c499-49b1-a9f6-a61c6e96b13f-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:08.078338 master-0 kubenswrapper[3985]: I0313 01:12:08.078321 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:08.078396 master-0 kubenswrapper[3985]: I0313 01:12:08.078336 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 01:12:08.078396 master-0 kubenswrapper[3985]: I0313 01:12:08.078352 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:08.078450 master-0 kubenswrapper[3985]: I0313 01:12:08.078394 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74efa52b-fd97-418a-9a44-914442633f74-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:08.078450 master-0 kubenswrapper[3985]: I0313 01:12:08.078423 
3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:08.078501 master-0 kubenswrapper[3985]: I0313 01:12:08.078473 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-client\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.078575 master-0 kubenswrapper[3985]: I0313 01:12:08.078501 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 01:12:08.078679 master-0 kubenswrapper[3985]: I0313 01:12:08.078648 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 01:12:08.078820 master-0 kubenswrapper[3985]: I0313 01:12:08.078805 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 01:12:08.079090 master-0 kubenswrapper[3985]: I0313 01:12:08.078500 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jc8xs\" (UniqueName: \"kubernetes.io/projected/46015913-c499-49b1-a9f6-a61c6e96b13f-kube-api-access-jc8xs\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:08.079193 master-0 
kubenswrapper[3985]: I0313 01:12:08.079165 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b4qsk\" (UniqueName: \"kubernetes.io/projected/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-kube-api-access-b4qsk\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.079247 master-0 kubenswrapper[3985]: I0313 01:12:08.079218 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfc2caf-126e-41b9-9b31-05f7a45d8536-serving-cert\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:08.079285 master-0 kubenswrapper[3985]: I0313 01:12:08.079249 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 01:12:08.079285 master-0 kubenswrapper[3985]: I0313 01:12:08.079268 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91fc568a-61ad-400e-a54e-21d62e51bb17-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:08.079346 master-0 kubenswrapper[3985]: I0313 01:12:08.079309 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smhrl\" (UniqueName: \"kubernetes.io/projected/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-kube-api-access-smhrl\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:08.079377 master-0 kubenswrapper[3985]: I0313 01:12:08.079346 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b67a99-eada-44d7-93eb-cc3ced777fc6-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:08.079407 master-0 kubenswrapper[3985]: I0313 01:12:08.079374 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.079407 master-0 kubenswrapper[3985]: I0313 01:12:08.079400 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde89b0b-7133-4b97-9e35-51c0382bd366-serving-cert\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:08.079538 master-0 kubenswrapper[3985]: I0313 01:12:08.079423 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:08.079538 master-0 kubenswrapper[3985]: I0313 01:12:08.079450 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8l9r\" (UniqueName: \"kubernetes.io/projected/6fd82994-f4d4-49e9-8742-07e206322e76-kube-api-access-k8l9r\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:08.079538 master-0 kubenswrapper[3985]: I0313 01:12:08.079480 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5gc8\" (UniqueName: \"kubernetes.io/projected/6ad2904e-ece9-4d72-8683-c3e691e07497-kube-api-access-k5gc8\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:08.080003 master-0 kubenswrapper[3985]: I0313 01:12:08.079504 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-config\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.080091 master-0 kubenswrapper[3985]: I0313 01:12:08.080038 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6db75e5-efd1-4bfa-9941-0934d7621ba2-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:08.080091 master-0 kubenswrapper[3985]: I0313 01:12:08.080083 3985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztdc9\" (UniqueName: \"kubernetes.io/projected/b5757329-8692-4719-b3c7-b5df78110fcf-kube-api-access-ztdc9\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.080200 master-0 kubenswrapper[3985]: I0313 01:12:08.080035 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"] Mar 13 01:12:08.080200 master-0 kubenswrapper[3985]: I0313 01:12:08.080118 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:08.080332 master-0 kubenswrapper[3985]: I0313 01:12:08.080210 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:08.080332 master-0 kubenswrapper[3985]: I0313 01:12:08.080235 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4b8jr\" (UniqueName: \"kubernetes.io/projected/7d874a21-43aa-4d81-b904-853fb3da5a94-kube-api-access-4b8jr\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:08.080332 master-0 kubenswrapper[3985]: I0313 01:12:08.080263 3985 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fde89b0b-7133-4b97-9e35-51c0382bd366-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:08.080332 master-0 kubenswrapper[3985]: I0313 01:12:08.080298 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74efa52b-fd97-418a-9a44-914442633f74-config\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:08.080332 master-0 kubenswrapper[3985]: I0313 01:12:08.080318 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b67a99-eada-44d7-93eb-cc3ced777fc6-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:08.080586 master-0 kubenswrapper[3985]: I0313 01:12:08.080344 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.080586 master-0 kubenswrapper[3985]: I0313 01:12:08.080373 3985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rg4g\" (UniqueName: \"kubernetes.io/projected/96b67a99-eada-44d7-93eb-cc3ced777fc6-kube-api-access-4rg4g\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:08.080586 master-0 kubenswrapper[3985]: I0313 01:12:08.080400 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6db75e5-efd1-4bfa-9941-0934d7621ba2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:08.080586 master-0 kubenswrapper[3985]: I0313 01:12:08.080546 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5lg5\" (UniqueName: \"kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:08.082671 master-0 kubenswrapper[3985]: I0313 01:12:08.082655 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 01:12:08.084168 master-0 kubenswrapper[3985]: I0313 01:12:08.084138 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 13 01:12:08.090745 master-0 kubenswrapper[3985]: I0313 01:12:08.090711 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8"] Mar 13 
01:12:08.092081 master-0 kubenswrapper[3985]: I0313 01:12:08.092030 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"] Mar 13 01:12:08.092740 master-0 kubenswrapper[3985]: I0313 01:12:08.092703 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 01:12:08.093476 master-0 kubenswrapper[3985]: I0313 01:12:08.093406 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 01:12:08.096229 master-0 kubenswrapper[3985]: I0313 01:12:08.096192 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 01:12:08.116630 master-0 kubenswrapper[3985]: I0313 01:12:08.111035 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"] Mar 13 01:12:08.121645 master-0 kubenswrapper[3985]: I0313 01:12:08.119259 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"] Mar 13 01:12:08.128251 master-0 kubenswrapper[3985]: I0313 01:12:08.126956 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8"] Mar 13 01:12:08.128251 master-0 kubenswrapper[3985]: I0313 01:12:08.127010 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"] Mar 13 01:12:08.128251 master-0 kubenswrapper[3985]: I0313 01:12:08.127022 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5"] Mar 13 01:12:08.130926 master-0 kubenswrapper[3985]: I0313 01:12:08.130893 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn"] Mar 13 01:12:08.131124 master-0 kubenswrapper[3985]: I0313 01:12:08.131103 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn"] Mar 13 01:12:08.133491 master-0 kubenswrapper[3985]: I0313 01:12:08.133465 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddtwn"] Mar 13 01:12:08.133491 master-0 kubenswrapper[3985]: I0313 01:12:08.133488 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"] Mar 13 01:12:08.142533 master-0 kubenswrapper[3985]: I0313 01:12:08.136313 3985 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-operator/iptables-alerter-mkkgg"] Mar 13 01:12:08.142533 master-0 kubenswrapper[3985]: I0313 01:12:08.137106 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg"] Mar 13 01:12:08.142533 master-0 kubenswrapper[3985]: I0313 01:12:08.137241 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.142533 master-0 kubenswrapper[3985]: I0313 01:12:08.142132 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 01:12:08.151363 master-0 kubenswrapper[3985]: I0313 01:12:08.149036 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-wb6qq"] Mar 13 01:12:08.166535 master-0 kubenswrapper[3985]: I0313 01:12:08.161816 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"] Mar 13 01:12:08.166535 master-0 kubenswrapper[3985]: I0313 01:12:08.162557 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"] Mar 13 01:12:08.166535 master-0 kubenswrapper[3985]: I0313 01:12:08.164545 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8"] Mar 13 01:12:08.166535 master-0 kubenswrapper[3985]: I0313 01:12:08.164570 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"] Mar 13 01:12:08.177813 master-0 kubenswrapper[3985]: I0313 01:12:08.176701 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:08.180334 master-0 kubenswrapper[3985]: I0313 01:12:08.180290 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.180468 3985 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.180626 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h"] Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181418 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde89b0b-7133-4b97-9e35-51c0382bd366-config\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181445 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-operand-assets\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181469 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181490 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpdjh\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-kube-api-access-zpdjh\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181525 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-bound-sa-token\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181542 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181566 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/46015913-c499-49b1-a9f6-a61c6e96b13f-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181585 3985 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6db75e5-efd1-4bfa-9941-0934d7621ba2-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181602 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181622 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181639 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74efa52b-fd97-418a-9a44-914442633f74-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181657 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181675 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-client\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.183531 master-0 kubenswrapper[3985]: I0313 01:12:08.181725 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc8xs\" (UniqueName: \"kubernetes.io/projected/46015913-c499-49b1-a9f6-a61c6e96b13f-kube-api-access-jc8xs\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181757 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4qsk\" (UniqueName: \"kubernetes.io/projected/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-kube-api-access-b4qsk\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181790 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfc2caf-126e-41b9-9b31-05f7a45d8536-serving-cert\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " 
pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181806 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91fc568a-61ad-400e-a54e-21d62e51bb17-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181826 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smhrl\" (UniqueName: \"kubernetes.io/projected/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-kube-api-access-smhrl\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181844 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b67a99-eada-44d7-93eb-cc3ced777fc6-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181861 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181879 3985 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde89b0b-7133-4b97-9e35-51c0382bd366-serving-cert\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181902 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181921 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8l9r\" (UniqueName: \"kubernetes.io/projected/6fd82994-f4d4-49e9-8742-07e206322e76-kube-api-access-k8l9r\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181947 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bzs5\" (UniqueName: \"kubernetes.io/projected/31f19d97-50f9-4486-a8f9-df61ef2b0528-kube-api-access-4bzs5\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181967 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5gc8\" (UniqueName: 
\"kubernetes.io/projected/6ad2904e-ece9-4d72-8683-c3e691e07497-kube-api-access-k5gc8\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.181987 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-config\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.182005 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69da0e58-2ae6-4d4b-b125-77e93df3d660-host-slash\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.182022 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-config\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:08.183944 master-0 kubenswrapper[3985]: I0313 01:12:08.182041 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6db75e5-efd1-4bfa-9941-0934d7621ba2-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 
01:12:08.184356 master-0 kubenswrapper[3985]: I0313 01:12:08.182059 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztdc9\" (UniqueName: \"kubernetes.io/projected/b5757329-8692-4719-b3c7-b5df78110fcf-kube-api-access-ztdc9\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: I0313 01:12:08.182078 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: I0313 01:12:08.182098 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b8jr\" (UniqueName: \"kubernetes.io/projected/7d874a21-43aa-4d81-b904-853fb3da5a94-kube-api-access-4b8jr\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: I0313 01:12:08.182115 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fde89b0b-7133-4b97-9e35-51c0382bd366-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: I0313 01:12:08.182133 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") 
pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: E0313 01:12:08.183353 3985 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: E0313 01:12:08.183399 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls podName:46015913-c499-49b1-a9f6-a61c6e96b13f nodeName:}" failed. No retries permitted until 2026-03-13 01:12:08.683384148 +0000 UTC m=+114.560064362 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-75jj7" (UID: "46015913-c499-49b1-a9f6-a61c6e96b13f") : secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: E0313 01:12:08.183587 3985 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: E0313 01:12:08.183615 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs podName:161d2fa6-a541-427a-a3e9-3297102a26f5 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:08.683606123 +0000 UTC m=+114.560286337 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs") pod "multus-admission-controller-8d675b596-ddtwn" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5") : secret "multus-admission-controller-secret" not found Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: E0313 01:12:08.183651 3985 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: E0313 01:12:08.183670 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls podName:91fc568a-61ad-400e-a54e-21d62e51bb17 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:08.683664254 +0000 UTC m=+114.560344468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-6vvzl" (UID: "91fc568a-61ad-400e-a54e-21d62e51bb17") : secret "image-registry-operator-tls" not found Mar 13 01:12:08.184356 master-0 kubenswrapper[3985]: I0313 01:12:08.183691 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91fc568a-61ad-400e-a54e-21d62e51bb17-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:08.194532 master-0 kubenswrapper[3985]: I0313 01:12:08.186083 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-config\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: 
\"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.194532 master-0 kubenswrapper[3985]: I0313 01:12:08.186903 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6db75e5-efd1-4bfa-9941-0934d7621ba2-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:08.194532 master-0 kubenswrapper[3985]: E0313 01:12:08.187553 3985 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:08.194532 master-0 kubenswrapper[3985]: E0313 01:12:08.187590 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls podName:7d874a21-43aa-4d81-b904-853fb3da5a94 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:08.687578616 +0000 UTC m=+114.564258820 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls") pod "dns-operator-589895fbb7-wb6qq" (UID: "7d874a21-43aa-4d81-b904-853fb3da5a94") : secret "metrics-tls" not found Mar 13 01:12:08.194532 master-0 kubenswrapper[3985]: I0313 01:12:08.188652 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde89b0b-7133-4b97-9e35-51c0382bd366-config\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:08.194532 master-0 kubenswrapper[3985]: I0313 01:12:08.188941 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-operand-assets\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:08.194532 master-0 kubenswrapper[3985]: I0313 01:12:08.181745 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"] Mar 13 01:12:08.194532 master-0 kubenswrapper[3985]: I0313 01:12:08.190519 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.194532 master-0 kubenswrapper[3985]: I0313 01:12:08.193607 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/96b67a99-eada-44d7-93eb-cc3ced777fc6-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:08.194947 master-0 kubenswrapper[3985]: I0313 01:12:08.194740 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.195495 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-client\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.195785 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.195835 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74efa52b-fd97-418a-9a44-914442633f74-config\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.195868 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rrvhw\" (UniqueName: \"kubernetes.io/projected/8ad2a6d5-6edf-4840-89f9-47847c8dac05-kube-api-access-rrvhw\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.195891 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b67a99-eada-44d7-93eb-cc3ced777fc6-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.195912 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.195965 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:08.203558 
master-0 kubenswrapper[3985]: I0313 01:12:08.195994 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rg4g\" (UniqueName: \"kubernetes.io/projected/96b67a99-eada-44d7-93eb-cc3ced777fc6-kube-api-access-4rg4g\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.196019 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6db75e5-efd1-4bfa-9941-0934d7621ba2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.196038 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5lg5\" (UniqueName: \"kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: E0313 01:12:08.196283 3985 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: E0313 01:12:08.196322 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. 
No retries permitted until 2026-03-13 01:12:08.69631022 +0000 UTC m=+114.572990434 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.196570 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b67a99-eada-44d7-93eb-cc3ced777fc6-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.196620 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:08.203558 master-0 kubenswrapper[3985]: I0313 01:12:08.196651 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzxv5\" (UniqueName: \"kubernetes.io/projected/69da0e58-2ae6-4d4b-b125-77e93df3d660-kube-api-access-pzxv5\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196676 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nbvg\" (UniqueName: 
\"kubernetes.io/projected/fbfc2caf-126e-41b9-9b31-05f7a45d8536-kube-api-access-2nbvg\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196692 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-serving-cert\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196714 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6fd82994-f4d4-49e9-8742-07e206322e76-available-featuregates\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196732 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196750 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: 
\"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196769 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfc2caf-126e-41b9-9b31-05f7a45d8536-config\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196787 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196806 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jgj\" (UniqueName: \"kubernetes.io/projected/d163333f-fda5-4067-ad7c-6f646ae411c8-kube-api-access-v2jgj\") pod \"csi-snapshot-controller-operator-5685fbc7d-478l8\" (UID: \"d163333f-fda5-4067-ad7c-6f646ae411c8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196822 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:08.204099 master-0 
kubenswrapper[3985]: I0313 01:12:08.196840 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkzq\" (UniqueName: \"kubernetes.io/projected/74efa52b-fd97-418a-9a44-914442633f74-kube-api-access-8jkzq\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196857 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196877 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhk76\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-kube-api-access-fhk76\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196893 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75a53c09-210a-4346-99b0-a632b9e0a3c9-trusted-ca\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196911 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:08.204099 master-0 kubenswrapper[3985]: I0313 01:12:08.196945 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.196969 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz9qf\" (UniqueName: \"kubernetes.io/projected/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-kube-api-access-fz9qf\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.196990 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197010 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd82994-f4d4-49e9-8742-07e206322e76-serving-cert\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: 
\"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197037 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197053 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197069 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-config\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197084 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqfj5\" (UniqueName: \"kubernetes.io/projected/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-kube-api-access-pqfj5\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197104 3985 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98t5h\" (UniqueName: \"kubernetes.io/projected/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-kube-api-access-98t5h\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197123 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197139 3985 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/69da0e58-2ae6-4d4b-b125-77e93df3d660-iptables-alerter-script\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197157 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5757329-8692-4719-b3c7-b5df78110fcf-serving-cert\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197220 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74efa52b-fd97-418a-9a44-914442633f74-config\") pod 
\"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197292 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6db75e5-efd1-4bfa-9941-0934d7621ba2-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: E0313 01:12:08.197583 3985 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197932 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6fd82994-f4d4-49e9-8742-07e206322e76-available-featuregates\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:08.204608 master-0 kubenswrapper[3985]: I0313 01:12:08.197998 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/46015913-c499-49b1-a9f6-a61c6e96b13f-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:08.204986 master-0 kubenswrapper[3985]: E0313 01:12:08.198042 3985 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:08.204986 master-0 
kubenswrapper[3985]: E0313 01:12:08.198081 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 01:12:08.204986 master-0 kubenswrapper[3985]: E0313 01:12:08.198107 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls podName:75a53c09-210a-4346-99b0-a632b9e0a3c9 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:08.698086137 +0000 UTC m=+114.574766351 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls") pod "ingress-operator-677db989d6-p5c8r" (UID: "75a53c09-210a-4346-99b0-a632b9e0a3c9") : secret "metrics-tls" not found Mar 13 01:12:08.204986 master-0 kubenswrapper[3985]: E0313 01:12:08.198252 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert podName:6ad2904e-ece9-4d72-8683-c3e691e07497 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:08.69824428 +0000 UTC m=+114.574924494 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert") pod "catalog-operator-7d9c49f57b-4jttq" (UID: "6ad2904e-ece9-4d72-8683-c3e691e07497") : secret "catalog-operator-serving-cert" not found Mar 13 01:12:08.204986 master-0 kubenswrapper[3985]: E0313 01:12:08.198653 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:08.698642819 +0000 UTC m=+114.575323033 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "node-tuning-operator-tls" not found Mar 13 01:12:08.204986 master-0 kubenswrapper[3985]: I0313 01:12:08.199447 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.204986 master-0 kubenswrapper[3985]: I0313 01:12:08.200467 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.204986 master-0 kubenswrapper[3985]: I0313 01:12:08.201538 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfc2caf-126e-41b9-9b31-05f7a45d8536-serving-cert\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:08.204986 master-0 kubenswrapper[3985]: I0313 01:12:08.201742 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde89b0b-7133-4b97-9e35-51c0382bd366-serving-cert\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:08.204986 
master-0 kubenswrapper[3985]: I0313 01:12:08.201751 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75a53c09-210a-4346-99b0-a632b9e0a3c9-trusted-ca\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:08.204986 master-0 kubenswrapper[3985]: I0313 01:12:08.203887 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfc2caf-126e-41b9-9b31-05f7a45d8536-config\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:08.211534 master-0 kubenswrapper[3985]: I0313 01:12:08.205824 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74efa52b-fd97-418a-9a44-914442633f74-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:08.211534 master-0 kubenswrapper[3985]: I0313 01:12:08.205896 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:08.211534 master-0 kubenswrapper[3985]: I0313 01:12:08.207324 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-trusted-ca-bundle\") 
pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.211534 master-0 kubenswrapper[3985]: I0313 01:12:08.208265 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-config\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.228533 master-0 kubenswrapper[3985]: I0313 01:12:08.215820 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5757329-8692-4719-b3c7-b5df78110fcf-serving-cert\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.228533 master-0 kubenswrapper[3985]: I0313 01:12:08.222148 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"] Mar 13 01:12:08.228533 master-0 kubenswrapper[3985]: I0313 01:12:08.224389 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5gc8\" (UniqueName: \"kubernetes.io/projected/6ad2904e-ece9-4d72-8683-c3e691e07497-kube-api-access-k5gc8\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:08.228533 master-0 kubenswrapper[3985]: I0313 01:12:08.225672 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-serving-cert\") pod \"etcd-operator-5884b9cd56-8r87t\" 
(UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.232790 master-0 kubenswrapper[3985]: I0313 01:12:08.231111 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhk76\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-kube-api-access-fhk76\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:08.232790 master-0 kubenswrapper[3985]: I0313 01:12:08.231779 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:08.237582 master-0 kubenswrapper[3985]: I0313 01:12:08.236342 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz9qf\" (UniqueName: \"kubernetes.io/projected/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-kube-api-access-fz9qf\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.245531 master-0 kubenswrapper[3985]: I0313 01:12:08.241306 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd82994-f4d4-49e9-8742-07e206322e76-serving-cert\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:08.245531 master-0 kubenswrapper[3985]: I0313 01:12:08.241961 3985 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-4rg4g\" (UniqueName: \"kubernetes.io/projected/96b67a99-eada-44d7-93eb-cc3ced777fc6-kube-api-access-4rg4g\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:08.245531 master-0 kubenswrapper[3985]: I0313 01:12:08.243953 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztdc9\" (UniqueName: \"kubernetes.io/projected/b5757329-8692-4719-b3c7-b5df78110fcf-kube-api-access-ztdc9\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.250680 master-0 kubenswrapper[3985]: I0313 01:12:08.250354 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc8xs\" (UniqueName: \"kubernetes.io/projected/46015913-c499-49b1-a9f6-a61c6e96b13f-kube-api-access-jc8xs\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:08.251466 master-0 kubenswrapper[3985]: I0313 01:12:08.251198 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:08.257165 master-0 kubenswrapper[3985]: I0313 01:12:08.256493 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:08.257165 master-0 kubenswrapper[3985]: I0313 01:12:08.256547 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jgj\" (UniqueName: \"kubernetes.io/projected/d163333f-fda5-4067-ad7c-6f646ae411c8-kube-api-access-v2jgj\") pod \"csi-snapshot-controller-operator-5685fbc7d-478l8\" (UID: \"d163333f-fda5-4067-ad7c-6f646ae411c8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8" Mar 13 01:12:08.260318 master-0 kubenswrapper[3985]: I0313 01:12:08.259223 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nbvg\" (UniqueName: \"kubernetes.io/projected/fbfc2caf-126e-41b9-9b31-05f7a45d8536-kube-api-access-2nbvg\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:08.262755 master-0 kubenswrapper[3985]: I0313 01:12:08.262406 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8l9r\" (UniqueName: \"kubernetes.io/projected/6fd82994-f4d4-49e9-8742-07e206322e76-kube-api-access-k8l9r\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:08.263026 
master-0 kubenswrapper[3985]: I0313 01:12:08.263001 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkzq\" (UniqueName: \"kubernetes.io/projected/74efa52b-fd97-418a-9a44-914442633f74-kube-api-access-8jkzq\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:08.263210 master-0 kubenswrapper[3985]: I0313 01:12:08.263187 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b8jr\" (UniqueName: \"kubernetes.io/projected/7d874a21-43aa-4d81-b904-853fb3da5a94-kube-api-access-4b8jr\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:08.263691 master-0 kubenswrapper[3985]: I0313 01:12:08.263523 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smhrl\" (UniqueName: \"kubernetes.io/projected/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-kube-api-access-smhrl\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:08.264341 master-0 kubenswrapper[3985]: I0313 01:12:08.264289 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:08.264872 master-0 kubenswrapper[3985]: I0313 01:12:08.264851 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-bound-sa-token\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:08.265267 master-0 kubenswrapper[3985]: I0313 01:12:08.265238 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5lg5\" (UniqueName: \"kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:08.267994 master-0 kubenswrapper[3985]: I0313 01:12:08.267960 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4qsk\" (UniqueName: \"kubernetes.io/projected/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-kube-api-access-b4qsk\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.268337 master-0 kubenswrapper[3985]: I0313 01:12:08.268286 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpdjh\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-kube-api-access-zpdjh\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:08.273284 master-0 kubenswrapper[3985]: I0313 01:12:08.273246 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fde89b0b-7133-4b97-9e35-51c0382bd366-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:08.276246 master-0 kubenswrapper[3985]: I0313 01:12:08.276128 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:08.294142 master-0 kubenswrapper[3985]: I0313 01:12:08.294080 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6db75e5-efd1-4bfa-9941-0934d7621ba2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:08.300123 master-0 kubenswrapper[3985]: I0313 01:12:08.300075 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bzs5\" (UniqueName: \"kubernetes.io/projected/31f19d97-50f9-4486-a8f9-df61ef2b0528-kube-api-access-4bzs5\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:08.300199 master-0 kubenswrapper[3985]: I0313 01:12:08.300128 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69da0e58-2ae6-4d4b-b125-77e93df3d660-host-slash\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.300199 master-0 kubenswrapper[3985]: I0313 01:12:08.300153 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-config\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:08.300251 master-0 kubenswrapper[3985]: I0313 01:12:08.300219 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvhw\" (UniqueName: \"kubernetes.io/projected/8ad2a6d5-6edf-4840-89f9-47847c8dac05-kube-api-access-rrvhw\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:08.300298 master-0 kubenswrapper[3985]: I0313 01:12:08.300267 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:08.300347 master-0 kubenswrapper[3985]: I0313 01:12:08.300311 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:08.300347 master-0 kubenswrapper[3985]: I0313 01:12:08.300337 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzxv5\" (UniqueName: \"kubernetes.io/projected/69da0e58-2ae6-4d4b-b125-77e93df3d660-kube-api-access-pzxv5\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.300420 master-0 kubenswrapper[3985]: I0313 01:12:08.300363 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" 
(UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:08.300420 master-0 kubenswrapper[3985]: I0313 01:12:08.300386 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:08.300544 master-0 kubenswrapper[3985]: I0313 01:12:08.300463 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:08.300582 master-0 kubenswrapper[3985]: I0313 01:12:08.300559 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqfj5\" (UniqueName: \"kubernetes.io/projected/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-kube-api-access-pqfj5\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:08.300611 master-0 kubenswrapper[3985]: I0313 01:12:08.300588 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98t5h\" (UniqueName: \"kubernetes.io/projected/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-kube-api-access-98t5h\") pod 
\"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:08.300640 master-0 kubenswrapper[3985]: I0313 01:12:08.300609 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/69da0e58-2ae6-4d4b-b125-77e93df3d660-iptables-alerter-script\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.301531 master-0 kubenswrapper[3985]: I0313 01:12:08.301470 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/69da0e58-2ae6-4d4b-b125-77e93df3d660-iptables-alerter-script\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.301662 master-0 kubenswrapper[3985]: E0313 01:12:08.301627 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 01:12:08.301767 master-0 kubenswrapper[3985]: E0313 01:12:08.301684 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert podName:31f19d97-50f9-4486-a8f9-df61ef2b0528 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:08.801667905 +0000 UTC m=+114.678348119 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert") pod "olm-operator-d64cfc9db-r4gzg" (UID: "31f19d97-50f9-4486-a8f9-df61ef2b0528") : secret "olm-operator-serving-cert" not found Mar 13 01:12:08.303061 master-0 kubenswrapper[3985]: E0313 01:12:08.302319 3985 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 01:12:08.303061 master-0 kubenswrapper[3985]: E0313 01:12:08.302350 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics podName:8ad2a6d5-6edf-4840-89f9-47847c8dac05 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:08.802341699 +0000 UTC m=+114.679021913 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-bx29h" (UID: "8ad2a6d5-6edf-4840-89f9-47847c8dac05") : secret "marketplace-operator-metrics" not found Mar 13 01:12:08.303061 master-0 kubenswrapper[3985]: I0313 01:12:08.302555 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69da0e58-2ae6-4d4b-b125-77e93df3d660-host-slash\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.304756 master-0 kubenswrapper[3985]: I0313 01:12:08.304082 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-config\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:08.304756 master-0 kubenswrapper[3985]: E0313 01:12:08.304310 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 01:12:08.304756 master-0 kubenswrapper[3985]: E0313 01:12:08.304374 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert podName:53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:08.804352411 +0000 UTC m=+114.681032835 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-pj26h" (UID: "53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59") : secret "package-server-manager-serving-cert" not found Mar 13 01:12:08.308275 master-0 kubenswrapper[3985]: I0313 01:12:08.308229 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:08.319379 master-0 kubenswrapper[3985]: I0313 01:12:08.317703 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:08.320538 master-0 
kubenswrapper[3985]: I0313 01:12:08.320468 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzxv5\" (UniqueName: \"kubernetes.io/projected/69da0e58-2ae6-4d4b-b125-77e93df3d660-kube-api-access-pzxv5\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.325564 master-0 kubenswrapper[3985]: I0313 01:12:08.325472 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:08.328763 master-0 kubenswrapper[3985]: I0313 01:12:08.328727 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98t5h\" (UniqueName: \"kubernetes.io/projected/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-kube-api-access-98t5h\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:08.332360 master-0 kubenswrapper[3985]: I0313 01:12:08.332322 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bzs5\" (UniqueName: \"kubernetes.io/projected/31f19d97-50f9-4486-a8f9-df61ef2b0528-kube-api-access-4bzs5\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:08.336569 master-0 kubenswrapper[3985]: I0313 01:12:08.336254 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:08.339961 master-0 kubenswrapper[3985]: I0313 01:12:08.339930 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqfj5\" (UniqueName: \"kubernetes.io/projected/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-kube-api-access-pqfj5\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:08.354808 master-0 kubenswrapper[3985]: I0313 01:12:08.354772 3985 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvhw\" (UniqueName: \"kubernetes.io/projected/8ad2a6d5-6edf-4840-89f9-47847c8dac05-kube-api-access-rrvhw\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:08.383145 master-0 kubenswrapper[3985]: I0313 01:12:08.382870 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:08.389346 master-0 kubenswrapper[3985]: I0313 01:12:08.389307 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:08.392772 master-0 kubenswrapper[3985]: I0313 01:12:08.392571 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:08.406613 master-0 kubenswrapper[3985]: I0313 01:12:08.400873 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:08.417970 master-0 kubenswrapper[3985]: I0313 01:12:08.414718 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:08.441037 master-0 kubenswrapper[3985]: I0313 01:12:08.440977 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:08.460715 master-0 kubenswrapper[3985]: I0313 01:12:08.460664 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:08.475923 master-0 kubenswrapper[3985]: I0313 01:12:08.475417 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:08.490693 master-0 kubenswrapper[3985]: I0313 01:12:08.490643 3985 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8" Mar 13 01:12:08.649457 master-0 kubenswrapper[3985]: I0313 01:12:08.648146 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8"] Mar 13 01:12:08.649457 master-0 kubenswrapper[3985]: I0313 01:12:08.649090 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5"] Mar 13 01:12:08.710176 master-0 kubenswrapper[3985]: I0313 01:12:08.709965 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:08.710176 master-0 kubenswrapper[3985]: I0313 01:12:08.710011 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:08.710176 master-0 kubenswrapper[3985]: I0313 01:12:08.710038 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.710176 master-0 kubenswrapper[3985]: I0313 01:12:08.710092 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:08.710176 master-0 kubenswrapper[3985]: I0313 01:12:08.710121 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:08.710176 master-0 kubenswrapper[3985]: I0313 01:12:08.710158 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:08.710176 master-0 kubenswrapper[3985]: I0313 01:12:08.710184 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:08.710405 master-0 kubenswrapper[3985]: I0313 01:12:08.710213 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod 
\"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:08.710434 master-0 kubenswrapper[3985]: E0313 01:12:08.710406 3985 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:08.710766 master-0 kubenswrapper[3985]: E0313 01:12:08.710491 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls podName:46015913-c499-49b1-a9f6-a61c6e96b13f nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.71047104 +0000 UTC m=+115.587151254 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-75jj7" (UID: "46015913-c499-49b1-a9f6-a61c6e96b13f") : secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:08.713518 master-0 kubenswrapper[3985]: E0313 01:12:08.713235 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 01:12:08.713518 master-0 kubenswrapper[3985]: E0313 01:12:08.713330 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert podName:6ad2904e-ece9-4d72-8683-c3e691e07497 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.713304419 +0000 UTC m=+115.589984633 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert") pod "catalog-operator-7d9c49f57b-4jttq" (UID: "6ad2904e-ece9-4d72-8683-c3e691e07497") : secret "catalog-operator-serving-cert" not found Mar 13 01:12:08.713786 master-0 kubenswrapper[3985]: E0313 01:12:08.713725 3985 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 01:12:08.713786 master-0 kubenswrapper[3985]: E0313 01:12:08.713765 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls podName:91fc568a-61ad-400e-a54e-21d62e51bb17 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.71375589 +0000 UTC m=+115.590436104 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-6vvzl" (UID: "91fc568a-61ad-400e-a54e-21d62e51bb17") : secret "image-registry-operator-tls" not found Mar 13 01:12:08.713870 master-0 kubenswrapper[3985]: E0313 01:12:08.713839 3985 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 01:12:08.713915 master-0 kubenswrapper[3985]: E0313 01:12:08.713883 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs podName:161d2fa6-a541-427a-a3e9-3297102a26f5 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.713871382 +0000 UTC m=+115.590551596 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs") pod "multus-admission-controller-8d675b596-ddtwn" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5") : secret "multus-admission-controller-secret" not found Mar 13 01:12:08.713949 master-0 kubenswrapper[3985]: E0313 01:12:08.713937 3985 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:08.713977 master-0 kubenswrapper[3985]: E0313 01:12:08.713961 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls podName:7d874a21-43aa-4d81-b904-853fb3da5a94 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.713954844 +0000 UTC m=+115.590635058 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls") pod "dns-operator-589895fbb7-wb6qq" (UID: "7d874a21-43aa-4d81-b904-853fb3da5a94") : secret "metrics-tls" not found Mar 13 01:12:08.714053 master-0 kubenswrapper[3985]: E0313 01:12:08.714004 3985 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:08.714053 master-0 kubenswrapper[3985]: E0313 01:12:08.714030 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls podName:75a53c09-210a-4346-99b0-a632b9e0a3c9 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.714025045 +0000 UTC m=+115.590705259 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls") pod "ingress-operator-677db989d6-p5c8r" (UID: "75a53c09-210a-4346-99b0-a632b9e0a3c9") : secret "metrics-tls" not found Mar 13 01:12:08.714128 master-0 kubenswrapper[3985]: E0313 01:12:08.714067 3985 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 01:12:08.714128 master-0 kubenswrapper[3985]: E0313 01:12:08.714092 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.714084586 +0000 UTC m=+115.590764800 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "node-tuning-operator-tls" not found Mar 13 01:12:08.714128 master-0 kubenswrapper[3985]: E0313 01:12:08.714126 3985 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:08.714226 master-0 kubenswrapper[3985]: E0313 01:12:08.714144 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.714138947 +0000 UTC m=+115.590819161 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:08.759321 master-0 kubenswrapper[3985]: I0313 01:12:08.759245 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"] Mar 13 01:12:08.771464 master-0 kubenswrapper[3985]: I0313 01:12:08.771097 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg"] Mar 13 01:12:08.805604 master-0 kubenswrapper[3985]: W0313 01:12:08.805286 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod74efa52b_fd97_418a_9a44_914442633f74.slice/crio-37840cae91bb38842e33e47936d655dcd095da55d1359acc8622a63bc2e2f08c WatchSource:0}: Error finding container 37840cae91bb38842e33e47936d655dcd095da55d1359acc8622a63bc2e2f08c: Status 404 returned error can't find the container with id 37840cae91bb38842e33e47936d655dcd095da55d1359acc8622a63bc2e2f08c Mar 13 01:12:08.811534 master-0 kubenswrapper[3985]: I0313 01:12:08.811489 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:08.811632 master-0 kubenswrapper[3985]: I0313 01:12:08.811616 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod 
\"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:08.811716 master-0 kubenswrapper[3985]: I0313 01:12:08.811700 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:08.812056 master-0 kubenswrapper[3985]: E0313 01:12:08.811957 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 01:12:08.812056 master-0 kubenswrapper[3985]: E0313 01:12:08.812048 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert podName:31f19d97-50f9-4486-a8f9-df61ef2b0528 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.812021445 +0000 UTC m=+115.688701659 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert") pod "olm-operator-d64cfc9db-r4gzg" (UID: "31f19d97-50f9-4486-a8f9-df61ef2b0528") : secret "olm-operator-serving-cert" not found Mar 13 01:12:08.812612 master-0 kubenswrapper[3985]: E0313 01:12:08.812595 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 01:12:08.812903 master-0 kubenswrapper[3985]: E0313 01:12:08.812878 3985 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 01:12:08.812998 master-0 kubenswrapper[3985]: E0313 01:12:08.812971 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics podName:8ad2a6d5-6edf-4840-89f9-47847c8dac05 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.812946954 +0000 UTC m=+115.689627158 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-bx29h" (UID: "8ad2a6d5-6edf-4840-89f9-47847c8dac05") : secret "marketplace-operator-metrics" not found Mar 13 01:12:08.813090 master-0 kubenswrapper[3985]: E0313 01:12:08.813076 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert podName:53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:09.813051897 +0000 UTC m=+115.689732111 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-pj26h" (UID: "53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59") : secret "package-server-manager-serving-cert" not found Mar 13 01:12:08.840167 master-0 kubenswrapper[3985]: I0313 01:12:08.840128 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h"] Mar 13 01:12:08.948455 master-0 kubenswrapper[3985]: I0313 01:12:08.948242 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn"] Mar 13 01:12:08.955014 master-0 kubenswrapper[3985]: I0313 01:12:08.954870 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"] Mar 13 01:12:08.957813 master-0 kubenswrapper[3985]: W0313 01:12:08.957740 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod250a32b4_cc8d_43fa_9dd1_0a8d85a2739a.slice/crio-da609cd6cbb5b9e771ac633c351aa8997603432a2f5300b5aa8eef97f27120bb WatchSource:0}: Error finding container da609cd6cbb5b9e771ac633c351aa8997603432a2f5300b5aa8eef97f27120bb: Status 404 returned error can't find the container with id da609cd6cbb5b9e771ac633c351aa8997603432a2f5300b5aa8eef97f27120bb Mar 13 01:12:08.964587 master-0 kubenswrapper[3985]: W0313 01:12:08.964538 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6fd82994_f4d4_49e9_8742_07e206322e76.slice/crio-6923888f2474b2621a6d1f7b4784be73fc6d36844a46c111dbeb08c776fa9c52 WatchSource:0}: Error finding container 6923888f2474b2621a6d1f7b4784be73fc6d36844a46c111dbeb08c776fa9c52: Status 404 returned error 
can't find the container with id 6923888f2474b2621a6d1f7b4784be73fc6d36844a46c111dbeb08c776fa9c52 Mar 13 01:12:08.999654 master-0 kubenswrapper[3985]: I0313 01:12:08.999592 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" event={"ID":"77e6cd9e-b6ef-491c-a5c3-60dab81fd752","Type":"ContainerStarted","Data":"3678d76d6368f04d7424fd0ae731dc627699ae26c8d8180a738d9913435c9819"} Mar 13 01:12:09.000981 master-0 kubenswrapper[3985]: I0313 01:12:09.000882 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" event={"ID":"74efa52b-fd97-418a-9a44-914442633f74","Type":"ContainerStarted","Data":"37840cae91bb38842e33e47936d655dcd095da55d1359acc8622a63bc2e2f08c"} Mar 13 01:12:09.001888 master-0 kubenswrapper[3985]: I0313 01:12:09.001853 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" event={"ID":"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea","Type":"ContainerStarted","Data":"19d9989080bb99254df4633b984ed6ac361fb3f67806322eddb375cdee316de2"} Mar 13 01:12:09.004911 master-0 kubenswrapper[3985]: I0313 01:12:09.004859 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerStarted","Data":"6923888f2474b2621a6d1f7b4784be73fc6d36844a46c111dbeb08c776fa9c52"} Mar 13 01:12:09.006946 master-0 kubenswrapper[3985]: I0313 01:12:09.006906 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-mkkgg" event={"ID":"69da0e58-2ae6-4d4b-b125-77e93df3d660","Type":"ContainerStarted","Data":"2098d43302ad0e00931b30fb0473a362fee9e9000b89c27552d72a632e47afbd"} Mar 13 01:12:09.008221 master-0 kubenswrapper[3985]: I0313 01:12:09.008186 3985 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" event={"ID":"fde89b0b-7133-4b97-9e35-51c0382bd366","Type":"ContainerStarted","Data":"aa8d570cc916b085b102875f5c8076691d32fc0570491e0ffdf16bc87e8e94b9"} Mar 13 01:12:09.008221 master-0 kubenswrapper[3985]: I0313 01:12:09.008216 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" event={"ID":"fde89b0b-7133-4b97-9e35-51c0382bd366","Type":"ContainerStarted","Data":"de825527d944f688f2acf2625cf8789a7117e73fdf8ca84b446d4e5ce667dc74"} Mar 13 01:12:09.009322 master-0 kubenswrapper[3985]: I0313 01:12:09.009290 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" event={"ID":"96b67a99-eada-44d7-93eb-cc3ced777fc6","Type":"ContainerStarted","Data":"f26f2fe408a83b7887b45acd945c90cef651bf2e6e61b90316af3ed0a1cd741e"} Mar 13 01:12:09.010486 master-0 kubenswrapper[3985]: I0313 01:12:09.010436 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" event={"ID":"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a","Type":"ContainerStarted","Data":"da609cd6cbb5b9e771ac633c351aa8997603432a2f5300b5aa8eef97f27120bb"} Mar 13 01:12:09.121656 master-0 kubenswrapper[3985]: I0313 01:12:09.120873 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" podStartSLOduration=79.120843988 podStartE2EDuration="1m19.120843988s" podCreationTimestamp="2026-03-13 01:10:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:09.036784801 +0000 UTC m=+114.913465075" watchObservedRunningTime="2026-03-13 01:12:09.120843988 +0000 UTC m=+114.997524212" Mar 13 
01:12:09.122710 master-0 kubenswrapper[3985]: I0313 01:12:09.122555 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn"] Mar 13 01:12:09.123496 master-0 kubenswrapper[3985]: I0313 01:12:09.123451 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8"] Mar 13 01:12:09.127007 master-0 kubenswrapper[3985]: I0313 01:12:09.126914 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"] Mar 13 01:12:09.127007 master-0 kubenswrapper[3985]: I0313 01:12:09.127017 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8"] Mar 13 01:12:09.133437 master-0 kubenswrapper[3985]: I0313 01:12:09.127880 3985 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf"] Mar 13 01:12:09.138744 master-0 kubenswrapper[3985]: W0313 01:12:09.138500 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfbfc2caf_126e_41b9_9b31_05f7a45d8536.slice/crio-7477b641a786f084712c4f118bc6505bfe95f699f9d24590d99cd384fbe82b5c WatchSource:0}: Error finding container 7477b641a786f084712c4f118bc6505bfe95f699f9d24590d99cd384fbe82b5c: Status 404 returned error can't find the container with id 7477b641a786f084712c4f118bc6505bfe95f699f9d24590d99cd384fbe82b5c Mar 13 01:12:09.156881 master-0 kubenswrapper[3985]: W0313 01:12:09.156808 3985 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb5757329_8692_4719_b3c7_b5df78110fcf.slice/crio-2697e850ca89be32459183985b3f9fee84b93466b86c6d103ecf18157fa8b712 WatchSource:0}: Error finding container 
2697e850ca89be32459183985b3f9fee84b93466b86c6d103ecf18157fa8b712: Status 404 returned error can't find the container with id 2697e850ca89be32459183985b3f9fee84b93466b86c6d103ecf18157fa8b712 Mar 13 01:12:09.180461 master-0 kubenswrapper[3985]: I0313 01:12:09.177372 3985 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:09.180461 master-0 kubenswrapper[3985]: I0313 01:12:09.179318 3985 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 01:12:09.725989 master-0 kubenswrapper[3985]: I0313 01:12:09.725413 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:09.725989 master-0 kubenswrapper[3985]: E0313 01:12:09.725879 3985 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:09.725989 master-0 kubenswrapper[3985]: I0313 01:12:09.725929 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:09.725989 master-0 kubenswrapper[3985]: I0313 01:12:09.725958 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " 
pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:12:09.726322 master-0 kubenswrapper[3985]: I0313 01:12:09.726009 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:12:09.726322 master-0 kubenswrapper[3985]: E0313 01:12:09.726037 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls podName:75a53c09-210a-4346-99b0-a632b9e0a3c9 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.726014012 +0000 UTC m=+117.602694226 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls") pod "ingress-operator-677db989d6-p5c8r" (UID: "75a53c09-210a-4346-99b0-a632b9e0a3c9") : secret "metrics-tls" not found
Mar 13 01:12:09.726322 master-0 kubenswrapper[3985]: I0313 01:12:09.726135 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:12:09.726322 master-0 kubenswrapper[3985]: E0313 01:12:09.726245 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 01:12:09.726322 master-0 kubenswrapper[3985]: E0313 01:12:09.726267 3985 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 01:12:09.726322 master-0 kubenswrapper[3985]: E0313 01:12:09.726301 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert podName:6ad2904e-ece9-4d72-8683-c3e691e07497 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.726285368 +0000 UTC m=+117.602965582 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert") pod "catalog-operator-7d9c49f57b-4jttq" (UID: "6ad2904e-ece9-4d72-8683-c3e691e07497") : secret "catalog-operator-serving-cert" not found
Mar 13 01:12:09.726489 master-0 kubenswrapper[3985]: E0313 01:12:09.726337 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.726317178 +0000 UTC m=+117.602997392 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "node-tuning-operator-tls" not found
Mar 13 01:12:09.726489 master-0 kubenswrapper[3985]: E0313 01:12:09.726377 3985 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 01:12:09.726489 master-0 kubenswrapper[3985]: E0313 01:12:09.726401 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.726395 +0000 UTC m=+117.603075214 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 01:12:09.726489 master-0 kubenswrapper[3985]: E0313 01:12:09.726409 3985 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 01:12:09.726489 master-0 kubenswrapper[3985]: I0313 01:12:09.726465 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn"
Mar 13 01:12:09.726687 master-0 kubenswrapper[3985]: E0313 01:12:09.726662 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls podName:7d874a21-43aa-4d81-b904-853fb3da5a94 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.726628745 +0000 UTC m=+117.603309049 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls") pod "dns-operator-589895fbb7-wb6qq" (UID: "7d874a21-43aa-4d81-b904-853fb3da5a94") : secret "metrics-tls" not found
Mar 13 01:12:09.726985 master-0 kubenswrapper[3985]: E0313 01:12:09.726752 3985 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 01:12:09.728252 master-0 kubenswrapper[3985]: I0313 01:12:09.726748 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:12:09.728308 master-0 kubenswrapper[3985]: E0313 01:12:09.728273 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs podName:161d2fa6-a541-427a-a3e9-3297102a26f5 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.728251029 +0000 UTC m=+117.604931243 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs") pod "multus-admission-controller-8d675b596-ddtwn" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5") : secret "multus-admission-controller-secret" not found
Mar 13 01:12:09.728308 master-0 kubenswrapper[3985]: E0313 01:12:09.726822 3985 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 01:12:09.728375 master-0 kubenswrapper[3985]: E0313 01:12:09.728315 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls podName:91fc568a-61ad-400e-a54e-21d62e51bb17 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.72830299 +0000 UTC m=+117.604983204 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-6vvzl" (UID: "91fc568a-61ad-400e-a54e-21d62e51bb17") : secret "image-registry-operator-tls" not found
Mar 13 01:12:09.728375 master-0 kubenswrapper[3985]: I0313 01:12:09.728310 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"
Mar 13 01:12:09.728375 master-0 kubenswrapper[3985]: E0313 01:12:09.728364 3985 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 01:12:09.728457 master-0 kubenswrapper[3985]: E0313 01:12:09.728389 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls podName:46015913-c499-49b1-a9f6-a61c6e96b13f nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.728381781 +0000 UTC m=+117.605061995 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-75jj7" (UID: "46015913-c499-49b1-a9f6-a61c6e96b13f") : secret "cluster-monitoring-operator-tls" not found
Mar 13 01:12:09.829915 master-0 kubenswrapper[3985]: I0313 01:12:09.829832 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:12:09.830144 master-0 kubenswrapper[3985]: E0313 01:12:09.830082 3985 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 01:12:09.830144 master-0 kubenswrapper[3985]: I0313 01:12:09.830133 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:12:09.830241 master-0 kubenswrapper[3985]: E0313 01:12:09.830196 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics podName:8ad2a6d5-6edf-4840-89f9-47847c8dac05 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.830168132 +0000 UTC m=+117.706848346 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-bx29h" (UID: "8ad2a6d5-6edf-4840-89f9-47847c8dac05") : secret "marketplace-operator-metrics" not found
Mar 13 01:12:09.830414 master-0 kubenswrapper[3985]: E0313 01:12:09.830352 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 01:12:09.830504 master-0 kubenswrapper[3985]: E0313 01:12:09.830473 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert podName:53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.830447577 +0000 UTC m=+117.707127801 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-pj26h" (UID: "53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59") : secret "package-server-manager-serving-cert" not found
Mar 13 01:12:09.830786 master-0 kubenswrapper[3985]: I0313 01:12:09.830736 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:12:09.830973 master-0 kubenswrapper[3985]: E0313 01:12:09.830906 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 01:12:09.831019 master-0 kubenswrapper[3985]: E0313 01:12:09.830992 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert podName:31f19d97-50f9-4486-a8f9-df61ef2b0528 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:11.830975029 +0000 UTC m=+117.707655243 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert") pod "olm-operator-d64cfc9db-r4gzg" (UID: "31f19d97-50f9-4486-a8f9-df61ef2b0528") : secret "olm-operator-serving-cert" not found
Mar 13 01:12:10.018398 master-0 kubenswrapper[3985]: I0313 01:12:10.018352 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" event={"ID":"fbfc2caf-126e-41b9-9b31-05f7a45d8536","Type":"ContainerStarted","Data":"7477b641a786f084712c4f118bc6505bfe95f699f9d24590d99cd384fbe82b5c"}
Mar 13 01:12:10.022142 master-0 kubenswrapper[3985]: I0313 01:12:10.022082 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8" event={"ID":"d163333f-fda5-4067-ad7c-6f646ae411c8","Type":"ContainerStarted","Data":"1ee1fa592b43fd04f438a18672ba5cbe2212eefd748a0d3d95e70d1fbb463e36"}
Mar 13 01:12:10.023243 master-0 kubenswrapper[3985]: I0313 01:12:10.023209 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" event={"ID":"b5757329-8692-4719-b3c7-b5df78110fcf","Type":"ContainerStarted","Data":"2697e850ca89be32459183985b3f9fee84b93466b86c6d103ecf18157fa8b712"}
Mar 13 01:12:10.025202 master-0 kubenswrapper[3985]: I0313 01:12:10.025170 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" event={"ID":"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b","Type":"ContainerStarted","Data":"629398d15647c2b03b039e1c1901983e50f62b43495a0b3d1356a29ab7579f04"}
Mar 13 01:12:10.027739 master-0 kubenswrapper[3985]: I0313 01:12:10.027714 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" event={"ID":"c6db75e5-efd1-4bfa-9941-0934d7621ba2","Type":"ContainerStarted","Data":"343ebb9e9f7133e28dc8b97a72067095722cd38fc5a1cd6bd72819c24b19f9a4"}
Mar 13 01:12:11.776650 master-0 kubenswrapper[3985]: I0313 01:12:11.776566 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:12:11.776650 master-0 kubenswrapper[3985]: I0313 01:12:11.776656 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: E0313 01:12:11.776845 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: I0313 01:12:11.776879 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn"
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: I0313 01:12:11.776918 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: E0313 01:12:11.776944 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert podName:6ad2904e-ece9-4d72-8683-c3e691e07497 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.776922322 +0000 UTC m=+121.653602536 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert") pod "catalog-operator-7d9c49f57b-4jttq" (UID: "6ad2904e-ece9-4d72-8683-c3e691e07497") : secret "catalog-operator-serving-cert" not found
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: E0313 01:12:11.777118 3985 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: I0313 01:12:11.777153 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: E0313 01:12:11.777240 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.777212079 +0000 UTC m=+121.653892293 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "node-tuning-operator-tls" not found
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: E0313 01:12:11.777315 3985 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: E0313 01:12:11.777327 3985 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: E0313 01:12:11.777346 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls podName:46015913-c499-49b1-a9f6-a61c6e96b13f nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.777335561 +0000 UTC m=+121.654015765 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-75jj7" (UID: "46015913-c499-49b1-a9f6-a61c6e96b13f") : secret "cluster-monitoring-operator-tls" not found
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: E0313 01:12:11.777369 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls podName:91fc568a-61ad-400e-a54e-21d62e51bb17 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.777355752 +0000 UTC m=+121.654035966 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-6vvzl" (UID: "91fc568a-61ad-400e-a54e-21d62e51bb17") : secret "image-registry-operator-tls" not found
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: E0313 01:12:11.777394 3985 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: E0313 01:12:11.777423 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs podName:161d2fa6-a541-427a-a3e9-3297102a26f5 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.777414363 +0000 UTC m=+121.654094577 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs") pod "multus-admission-controller-8d675b596-ddtwn" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5") : secret "multus-admission-controller-secret" not found
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: I0313 01:12:11.777471 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"
Mar 13 01:12:11.777701 master-0 kubenswrapper[3985]: I0313 01:12:11.777525 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq"
Mar 13 01:12:11.778151 master-0 kubenswrapper[3985]: I0313 01:12:11.777577 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:12:11.778151 master-0 kubenswrapper[3985]: E0313 01:12:11.777618 3985 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 01:12:11.778151 master-0 kubenswrapper[3985]: E0313 01:12:11.777651 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls podName:75a53c09-210a-4346-99b0-a632b9e0a3c9 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.777644827 +0000 UTC m=+121.654325041 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls") pod "ingress-operator-677db989d6-p5c8r" (UID: "75a53c09-210a-4346-99b0-a632b9e0a3c9") : secret "metrics-tls" not found
Mar 13 01:12:11.778151 master-0 kubenswrapper[3985]: E0313 01:12:11.777682 3985 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 01:12:11.778151 master-0 kubenswrapper[3985]: E0313 01:12:11.777706 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.777699469 +0000 UTC m=+121.654379683 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 01:12:11.778151 master-0 kubenswrapper[3985]: E0313 01:12:11.777748 3985 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 01:12:11.778151 master-0 kubenswrapper[3985]: E0313 01:12:11.777770 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls podName:7d874a21-43aa-4d81-b904-853fb3da5a94 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.77776338 +0000 UTC m=+121.654443594 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls") pod "dns-operator-589895fbb7-wb6qq" (UID: "7d874a21-43aa-4d81-b904-853fb3da5a94") : secret "metrics-tls" not found
Mar 13 01:12:11.884221 master-0 kubenswrapper[3985]: I0313 01:12:11.884090 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:12:11.884221 master-0 kubenswrapper[3985]: I0313 01:12:11.884181 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:12:11.884221 master-0 kubenswrapper[3985]: I0313 01:12:11.884223 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:12:11.884705 master-0 kubenswrapper[3985]: E0313 01:12:11.884316 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found
Mar 13 01:12:11.884705 master-0 kubenswrapper[3985]: E0313 01:12:11.884394 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert podName:31f19d97-50f9-4486-a8f9-df61ef2b0528 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.884374031 +0000 UTC m=+121.761054235 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert") pod "olm-operator-d64cfc9db-r4gzg" (UID: "31f19d97-50f9-4486-a8f9-df61ef2b0528") : secret "olm-operator-serving-cert" not found
Mar 13 01:12:11.884705 master-0 kubenswrapper[3985]: E0313 01:12:11.884477 3985 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found
Mar 13 01:12:11.884705 master-0 kubenswrapper[3985]: E0313 01:12:11.884618 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found
Mar 13 01:12:11.884705 master-0 kubenswrapper[3985]: E0313 01:12:11.884631 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics podName:8ad2a6d5-6edf-4840-89f9-47847c8dac05 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.884605256 +0000 UTC m=+121.761285680 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-bx29h" (UID: "8ad2a6d5-6edf-4840-89f9-47847c8dac05") : secret "marketplace-operator-metrics" not found
Mar 13 01:12:11.884705 master-0 kubenswrapper[3985]: E0313 01:12:11.884657 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert podName:53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:15.884647517 +0000 UTC m=+121.761327981 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-pj26h" (UID: "53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59") : secret "package-server-manager-serving-cert" not found
Mar 13 01:12:15.841257 master-0 kubenswrapper[3985]: I0313 01:12:15.841136 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn"
Mar 13 01:12:15.841257 master-0 kubenswrapper[3985]: I0313 01:12:15.841241 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.841502 3985 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.841636 3985 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.841687 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs podName:161d2fa6-a541-427a-a3e9-3297102a26f5 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.841647442 +0000 UTC m=+129.718327686 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs") pod "multus-admission-controller-8d675b596-ddtwn" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5") : secret "multus-admission-controller-secret" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.841734 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls podName:91fc568a-61ad-400e-a54e-21d62e51bb17 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.841706464 +0000 UTC m=+129.718386708 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-6vvzl" (UID: "91fc568a-61ad-400e-a54e-21d62e51bb17") : secret "image-registry-operator-tls" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: I0313 01:12:15.841797 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.841895 3985 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.841943 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls podName:46015913-c499-49b1-a9f6-a61c6e96b13f nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.841931629 +0000 UTC m=+129.718611843 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-75jj7" (UID: "46015913-c499-49b1-a9f6-a61c6e96b13f") : secret "cluster-monitoring-operator-tls" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: I0313 01:12:15.842049 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.842168 3985 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.842248 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls podName:75a53c09-210a-4346-99b0-a632b9e0a3c9 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.842220815 +0000 UTC m=+129.718901069 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls") pod "ingress-operator-677db989d6-p5c8r" (UID: "75a53c09-210a-4346-99b0-a632b9e0a3c9") : secret "metrics-tls" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: I0313 01:12:15.842294 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq"
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: I0313 01:12:15.842343 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.842461 3985 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.842462 3985 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: E0313 01:12:15.842504 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.842490821 +0000 UTC m=+129.719171065 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "performance-addon-operator-webhook-cert" not found
Mar 13 01:12:15.842693 master-0 kubenswrapper[3985]: I0313 01:12:15.842582 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:12:15.843924 master-0 kubenswrapper[3985]: I0313 01:12:15.842622 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:12:15.843924 master-0 kubenswrapper[3985]: E0313 01:12:15.842662 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls podName:7d874a21-43aa-4d81-b904-853fb3da5a94 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.842640524 +0000 UTC m=+129.719320768 (durationBeforeRetry 8s).
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls") pod "dns-operator-589895fbb7-wb6qq" (UID: "7d874a21-43aa-4d81-b904-853fb3da5a94") : secret "metrics-tls" not found Mar 13 01:12:15.843924 master-0 kubenswrapper[3985]: E0313 01:12:15.842760 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 01:12:15.843924 master-0 kubenswrapper[3985]: E0313 01:12:15.842767 3985 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 01:12:15.843924 master-0 kubenswrapper[3985]: E0313 01:12:15.842803 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert podName:6ad2904e-ece9-4d72-8683-c3e691e07497 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.842791527 +0000 UTC m=+129.719471771 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert") pod "catalog-operator-7d9c49f57b-4jttq" (UID: "6ad2904e-ece9-4d72-8683-c3e691e07497") : secret "catalog-operator-serving-cert" not found Mar 13 01:12:15.843924 master-0 kubenswrapper[3985]: E0313 01:12:15.842831 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.842818567 +0000 UTC m=+129.719498821 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "node-tuning-operator-tls" not found Mar 13 01:12:15.944157 master-0 kubenswrapper[3985]: I0313 01:12:15.944045 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:15.944560 master-0 kubenswrapper[3985]: I0313 01:12:15.944206 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:15.944682 master-0 kubenswrapper[3985]: E0313 01:12:15.944500 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 01:12:15.944682 master-0 kubenswrapper[3985]: E0313 01:12:15.944617 3985 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 01:12:15.944682 master-0 kubenswrapper[3985]: I0313 01:12:15.944550 3985 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod 
\"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:15.944682 master-0 kubenswrapper[3985]: E0313 01:12:15.944688 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert podName:53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.944647278 +0000 UTC m=+129.821327532 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-pj26h" (UID: "53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59") : secret "package-server-manager-serving-cert" not found Mar 13 01:12:15.945075 master-0 kubenswrapper[3985]: E0313 01:12:15.944718 3985 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 01:12:15.945075 master-0 kubenswrapper[3985]: E0313 01:12:15.944735 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert podName:31f19d97-50f9-4486-a8f9-df61ef2b0528 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.94471444 +0000 UTC m=+129.821394904 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert") pod "olm-operator-d64cfc9db-r4gzg" (UID: "31f19d97-50f9-4486-a8f9-df61ef2b0528") : secret "olm-operator-serving-cert" not found Mar 13 01:12:15.945075 master-0 kubenswrapper[3985]: E0313 01:12:15.944810 3985 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics podName:8ad2a6d5-6edf-4840-89f9-47847c8dac05 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.944781631 +0000 UTC m=+129.821461875 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-bx29h" (UID: "8ad2a6d5-6edf-4840-89f9-47847c8dac05") : secret "marketplace-operator-metrics" not found Mar 13 01:12:20.082804 master-0 kubenswrapper[3985]: I0313 01:12:20.081812 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" event={"ID":"fbfc2caf-126e-41b9-9b31-05f7a45d8536","Type":"ContainerStarted","Data":"5436fbc43037209189594bd015e39350294b9b8da6b6096cb145d36bfb03543f"} Mar 13 01:12:20.087206 master-0 kubenswrapper[3985]: I0313 01:12:20.087134 3985 generic.go:334] "Generic (PLEG): container finished" podID="6fd82994-f4d4-49e9-8742-07e206322e76" containerID="b07ddec5ef3c1ac03f780236e9b354e58153c6ffb31f2047f7405a97d9d4d4c1" exitCode=0 Mar 13 01:12:20.087348 master-0 kubenswrapper[3985]: I0313 01:12:20.087203 3985 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerDied","Data":"b07ddec5ef3c1ac03f780236e9b354e58153c6ffb31f2047f7405a97d9d4d4c1"} Mar 13 01:12:20.138856 master-0 
kubenswrapper[3985]: I0313 01:12:20.138716 3985 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" podStartSLOduration=81.023041259 podStartE2EDuration="1m31.138681959s" podCreationTimestamp="2026-03-13 01:10:49 +0000 UTC" firstStartedPulling="2026-03-13 01:12:09.14230592 +0000 UTC m=+115.018986144" lastFinishedPulling="2026-03-13 01:12:19.25794659 +0000 UTC m=+125.134626844" observedRunningTime="2026-03-13 01:12:20.104311527 +0000 UTC m=+125.980991771" watchObservedRunningTime="2026-03-13 01:12:20.138681959 +0000 UTC m=+126.015362213" Mar 13 01:12:20.580375 master-0 systemd[1]: Stopping Kubernetes Kubelet... Mar 13 01:12:20.613741 master-0 systemd[1]: kubelet.service: Deactivated successfully. Mar 13 01:12:20.614031 master-0 systemd[1]: Stopped Kubernetes Kubelet. Mar 13 01:12:20.616553 master-0 systemd[1]: kubelet.service: Consumed 11.149s CPU time. Mar 13 01:12:20.659390 master-0 systemd[1]: Starting Kubernetes Kubelet... Mar 13 01:12:20.791732 master-0 kubenswrapper[7599]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 13 01:12:20.791732 master-0 kubenswrapper[7599]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Mar 13 01:12:20.791732 master-0 kubenswrapper[7599]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 13 01:12:20.791732 master-0 kubenswrapper[7599]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:12:20.791732 master-0 kubenswrapper[7599]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 13 01:12:20.793328 master-0 kubenswrapper[7599]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:12:20.793328 master-0 kubenswrapper[7599]: I0313 01:12:20.791884 7599 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797028 7599 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797055 7599 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797061 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797067 7599 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797072 7599 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797078 7599 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797083 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797089 7599 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797095 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797102 7599 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:12:20.797078 master-0 kubenswrapper[7599]: W0313 01:12:20.797108 7599 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797115 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797122 7599 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797127 7599 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797133 7599 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797138 7599 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797143 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797149 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797155 7599 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797161 7599 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797166 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797171 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797178 7599 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797184 7599 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797189 7599 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797195 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797199 7599 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797205 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797210 7599 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:12:20.798438 master-0 kubenswrapper[7599]: W0313 01:12:20.797216 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797223 7599 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797229 7599 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797235 7599 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797240 7599 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797245 7599 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797251 7599 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797256 7599 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797261 7599 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797269 7599 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797275 7599 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797281 7599 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797293 7599 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797298 7599 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797305 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797310 7599 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797315 7599 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797321 7599 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797325 7599 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:12:20.799964 master-0 kubenswrapper[7599]: W0313 01:12:20.797330 7599 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797335 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797340 7599 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797346 7599 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797353 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797360 7599 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797366 7599 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797371 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797376 7599 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797381 7599 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797386 7599 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797391 7599 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797396 7599 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797402 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797407 7599 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797412 7599 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797416 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797426 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797431 7599 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797436 7599 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 01:12:20.801207 master-0 kubenswrapper[7599]: W0313 01:12:20.797441 7599 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: W0313 01:12:20.797446 7599 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: W0313 01:12:20.797451 7599 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: W0313 01:12:20.797456 7599 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797595 7599 flags.go:64] FLAG: --address="0.0.0.0"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797610 7599 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797619 7599 flags.go:64] FLAG: --anonymous-auth="true"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797626 7599 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797634 7599 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797640 7599 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797647 7599 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797655 7599 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797661 7599 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797667 7599 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797673 7599 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797680 7599 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797686 7599 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797692 7599 flags.go:64] FLAG: --cgroup-root=""
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797697 7599 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797703 7599 flags.go:64] FLAG: --client-ca-file=""
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797709 7599 flags.go:64] FLAG: --cloud-config=""
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797714 7599 flags.go:64] FLAG: --cloud-provider=""
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797720 7599 flags.go:64] FLAG: --cluster-dns="[]"
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797729 7599 flags.go:64] FLAG: --cluster-domain=""
Mar 13 01:12:20.802916 master-0 kubenswrapper[7599]: I0313 01:12:20.797736 7599 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797744 7599 flags.go:64] FLAG: --config-dir=""
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797751 7599 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797759 7599 flags.go:64] FLAG: --container-log-max-files="5"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797774 7599 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797780 7599 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797786 7599 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797793 7599 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797798 7599 flags.go:64] FLAG: --contention-profiling="false"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797804 7599 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797809 7599 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797816 7599 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797821 7599 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797834 7599 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797841 7599 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797846 7599 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797853 7599 flags.go:64] FLAG: --enable-load-reader="false"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797859 7599 flags.go:64] FLAG: --enable-server="true"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797864 7599 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797872 7599 flags.go:64] FLAG: --event-burst="100"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797878 7599 flags.go:64] FLAG: --event-qps="50"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797885 7599 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797891 7599 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797897 7599 flags.go:64] FLAG: --eviction-hard=""
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797904 7599 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 13 01:12:20.804283 master-0 kubenswrapper[7599]: I0313 01:12:20.797910 7599 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797916 7599 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797922 7599 flags.go:64] FLAG: --eviction-soft=""
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797927 7599 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797932 7599 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797938 7599 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797944 7599 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797950 7599 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797955 7599 flags.go:64] FLAG: --fail-swap-on="true"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797960 7599 flags.go:64] FLAG: --feature-gates=""
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797968 7599 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797975 7599 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797981 7599 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797986 7599 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797992 7599 flags.go:64] FLAG: --healthz-port="10248"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.797998 7599 flags.go:64] FLAG: --help="false"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.798003 7599 flags.go:64] FLAG: --hostname-override=""
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.798009 7599 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.798014 7599 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.798020 7599 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.798029 7599 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.798035 7599 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.798040 7599 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.798046 7599 flags.go:64] FLAG: --image-service-endpoint=""
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.798052 7599 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 13 01:12:20.806724 master-0 kubenswrapper[7599]: I0313 01:12:20.798058 7599 flags.go:64] FLAG: --kube-api-burst="100"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798064 7599 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798070 7599 flags.go:64] FLAG: --kube-api-qps="50"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798076 7599 flags.go:64] FLAG: --kube-reserved=""
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798081 7599 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798087 7599 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798092 7599 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798098 7599 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798104 7599 flags.go:64] FLAG: --lock-file=""
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798109 7599 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798115 7599 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798120 7599 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798129 7599 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798134 7599 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798140 7599 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798146 7599 flags.go:64] FLAG: --logging-format="text"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798151 7599 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798157 7599 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798165 7599 flags.go:64] FLAG: --manifest-url=""
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798172 7599 flags.go:64] FLAG: --manifest-url-header=""
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798180 7599 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798185 7599 flags.go:64] FLAG: --max-open-files="1000000"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798192 7599 flags.go:64] FLAG: --max-pods="110"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798199 7599 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798205 7599 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 13 01:12:20.809613 master-0 kubenswrapper[7599]: I0313 01:12:20.798211 7599 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798216 7599 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798225 7599 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798230 7599 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798236 7599 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798248 7599 flags.go:64] FLAG: --node-status-max-images="50"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798254 7599 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798259 7599 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798265 7599 flags.go:64] FLAG: --pod-cidr=""
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798272 7599 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798281 7599 flags.go:64] FLAG: --pod-manifest-path=""
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798287 7599 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798293 7599 flags.go:64] FLAG: --pods-per-core="0"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798298 7599 flags.go:64] FLAG: --port="10250"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798305 7599 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798310 7599 flags.go:64] FLAG: --provider-id=""
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798316 7599 flags.go:64] FLAG: --qos-reserved=""
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798321 7599 flags.go:64] FLAG: --read-only-port="10255"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798327 7599 flags.go:64] FLAG: --register-node="true"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798332 7599 flags.go:64] FLAG: --register-schedulable="true"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798338 7599 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798347 7599 flags.go:64] FLAG: --registry-burst="10"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798353 7599 flags.go:64] FLAG: --registry-qps="5"
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798358 7599 flags.go:64] FLAG: --reserved-cpus=""
Mar 13 01:12:20.811078 master-0 kubenswrapper[7599]: I0313 01:12:20.798364 7599 flags.go:64] FLAG: --reserved-memory=""
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798371 7599 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798377 7599 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798383 7599 flags.go:64] FLAG: --rotate-certificates="false"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798389 7599 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798395 7599 flags.go:64] FLAG: --runonce="false"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798400 7599 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798406 7599 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798412 7599 flags.go:64] FLAG: --seccomp-default="false"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798417 7599 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798423 7599 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798431 7599 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798438 7599 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798444 7599 flags.go:64] FLAG: --storage-driver-password="root"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798450 7599 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798472 7599 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798479 7599 flags.go:64] FLAG: --storage-driver-user="root"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798484 7599 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798491 7599 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798498 7599 flags.go:64] FLAG: --system-cgroups=""
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798523 7599 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798533 7599 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798541 7599 flags.go:64] FLAG: --tls-cert-file=""
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798548 7599 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798655 7599 flags.go:64] FLAG: --tls-min-version=""
Mar 13 01:12:20.812286 master-0 kubenswrapper[7599]: I0313 01:12:20.798664 7599 flags.go:64] FLAG: --tls-private-key-file=""
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: I0313 01:12:20.798669 7599 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: I0313 01:12:20.798675 7599 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: I0313 01:12:20.798681 7599 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: I0313 01:12:20.798686 7599 flags.go:64] FLAG: --v="2"
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: I0313 01:12:20.798694 7599 flags.go:64] FLAG: --version="false"
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: I0313 01:12:20.798701 7599 flags.go:64] FLAG: --vmodule=""
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: I0313 01:12:20.798709 7599 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: I0313 01:12:20.798715 7599 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798899 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798908 7599 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798914 7599 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798920 7599 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798925 7599 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798930 7599 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798937 7599 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798944 7599 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798950 7599 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798958 7599 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798964 7599 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798969 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:12:20.814041 master-0 kubenswrapper[7599]: W0313 01:12:20.798974 7599 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.798980 7599 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.798985 7599 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.798990 7599 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.798996 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799001 7599 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799006 7599 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799011 7599 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799017 7599 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799023 7599 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799029 7599 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799034 7599 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799039 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799044 7599 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799049 7599 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799054 7599 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799059 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799064 7599 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799069 7599 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799074 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:12:20.815301 master-0 kubenswrapper[7599]: W0313 01:12:20.799079 7599 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799084 7599 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799089 7599 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799094 7599 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799099 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799104 7599 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799109 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799114 7599 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799119 7599 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799126 7599 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799132 7599 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799138 7599 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799148 7599 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799155 7599 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799161 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799166 7599 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799172 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799177 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799182 7599 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:12:20.816416 master-0 kubenswrapper[7599]: W0313 01:12:20.799187 7599 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799192 7599 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799197 7599 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799202 7599 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799207 7599 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799212 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799217 7599 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799222 7599 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799227 7599 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799233 7599 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799238 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799243 7599 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799248 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799253 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799258 7599 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799263 7599 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799268 7599 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799273 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799277 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799282 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:12:20.817391 master-0 kubenswrapper[7599]: W0313 01:12:20.799288 7599 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:12:20.821110 master-0 kubenswrapper[7599]: I0313 01:12:20.799304 7599 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 01:12:20.829480 master-0 kubenswrapper[7599]: I0313 01:12:20.829417 7599 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 13 01:12:20.829480 master-0 kubenswrapper[7599]: I0313 01:12:20.829464 7599 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829696 7599 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829709 7599 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829715 7599 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829721 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829726 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829732 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829737 7599 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829743 7599 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829749 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829755 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829760 7599 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829766 7599 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829774 7599 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829783 7599 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829790 7599 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829796 7599 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829801 7599 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829807 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829812 7599 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:12:20.829919 master-0 kubenswrapper[7599]: W0313 01:12:20.829817 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829822 7599 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829828 7599 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829834 7599 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829839 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829844 7599 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829849 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829854 7599 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829859 7599 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829864 7599 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829871 7599 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829876 7599 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829881 7599 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829886 7599 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829894 7599 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829899 7599 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829904 7599 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829909 7599 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829915 7599 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829921 7599 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:12:20.835256 master-0 kubenswrapper[7599]: W0313 01:12:20.829927 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829933 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829941 7599 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829947 7599 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829952 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829958 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829965 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829970 7599 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829975 7599 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829980 7599 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829988 7599 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.829995 7599 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.830000 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.830004 7599 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.830009 7599 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.830014 7599 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.830019 7599 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.830024 7599 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.830029 7599 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.830034 7599 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:12:20.836781 master-0 kubenswrapper[7599]: W0313 01:12:20.830040 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830046 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830052 7599 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830061 7599 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830068 7599 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830073 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830080 7599 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830086 7599 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830091 7599 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830096 7599 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830103 7599 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830110 7599 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830116 7599 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: I0313 01:12:20.830126 7599 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 01:12:20.837868 master-0 kubenswrapper[7599]: W0313 01:12:20.830318 7599 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830330 7599 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830337 7599 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830345 7599 feature_gate.go:330] unrecognized feature gate: NewOLM Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830351 7599 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830357 7599 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830363 7599 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830368 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830374 7599 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830380 7599 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830386 7599 feature_gate.go:330] unrecognized feature gate: PlatformOperators Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830393 7599 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830399 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830405 7599 feature_gate.go:330] unrecognized feature gate: PinnedImages Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830411 7599 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830416 7599 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830422 7599 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830428 7599 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830434 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Mar 13 01:12:20.838863 master-0 kubenswrapper[7599]: W0313 01:12:20.830439 7599 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830445 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830450 7599 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830455 7599 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830491 7599 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830496 7599 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Mar 13 01:12:20.839863 master-0 
kubenswrapper[7599]: W0313 01:12:20.830502 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfig Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830525 7599 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830531 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830536 7599 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830542 7599 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830547 7599 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830552 7599 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830557 7599 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830562 7599 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830568 7599 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830573 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830578 7599 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830584 7599 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: 
W0313 01:12:20.830594 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Mar 13 01:12:20.839863 master-0 kubenswrapper[7599]: W0313 01:12:20.830599 7599 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830604 7599 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830609 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830615 7599 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830620 7599 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830625 7599 feature_gate.go:330] unrecognized feature gate: SignatureStores Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830630 7599 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830636 7599 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830641 7599 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830646 7599 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830651 7599 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830658 7599 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830665 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830671 7599 feature_gate.go:330] unrecognized feature gate: Example Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830677 7599 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830683 7599 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830689 7599 feature_gate.go:330] unrecognized feature gate: GatewayAPI Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830695 7599 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830700 7599 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Mar 13 01:12:20.840967 master-0 kubenswrapper[7599]: W0313 01:12:20.830706 7599 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830713 7599 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830719 7599 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830725 7599 feature_gate.go:330] unrecognized feature gate: OVNObservability Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830731 7599 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830736 7599 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830743 7599 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830748 7599 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830753 7599 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830758 7599 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830763 7599 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830768 7599 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830774 7599 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: W0313 01:12:20.830778 7599 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: I0313 01:12:20.830788 7599 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true 
DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Mar 13 01:12:20.846006 master-0 kubenswrapper[7599]: I0313 01:12:20.831068 7599 server.go:940] "Client rotation is on, will bootstrap in background" Mar 13 01:12:20.846990 master-0 kubenswrapper[7599]: I0313 01:12:20.843497 7599 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Mar 13 01:12:20.846990 master-0 kubenswrapper[7599]: I0313 01:12:20.843697 7599 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Mar 13 01:12:20.846990 master-0 kubenswrapper[7599]: I0313 01:12:20.844138 7599 server.go:997] "Starting client certificate rotation" Mar 13 01:12:20.846990 master-0 kubenswrapper[7599]: I0313 01:12:20.844152 7599 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Mar 13 01:12:20.846990 master-0 kubenswrapper[7599]: I0313 01:12:20.845259 7599 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 01:12:20.846990 master-0 kubenswrapper[7599]: I0313 01:12:20.845404 7599 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 01:02:11 +0000 UTC, rotation deadline is 2026-03-13 18:34:55.962893119 +0000 UTC Mar 13 01:12:20.846990 master-0 kubenswrapper[7599]: I0313 01:12:20.845540 7599 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 17h22m35.117358109s for next certificate rotation Mar 13 01:12:20.849334 master-0 
kubenswrapper[7599]: I0313 01:12:20.847148 7599 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 01:12:20.860681 master-0 kubenswrapper[7599]: I0313 01:12:20.860600 7599 log.go:25] "Validated CRI v1 runtime API" Mar 13 01:12:20.864278 master-0 kubenswrapper[7599]: I0313 01:12:20.864222 7599 log.go:25] "Validated CRI v1 image API" Mar 13 01:12:20.865884 master-0 kubenswrapper[7599]: I0313 01:12:20.865809 7599 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Mar 13 01:12:20.871121 master-0 kubenswrapper[7599]: I0313 01:12:20.871047 7599 fs.go:135] Filesystem UUIDs: map[157256f6-add8-4ac1-82d5-8fc6c96a0913:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4] Mar 13 01:12:20.871751 master-0 kubenswrapper[7599]: I0313 01:12:20.871099 7599 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/09550970a5450b6b18862ef0c3ad02b9ed34a2674a41f1a5f7113f8a2249dc19/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/09550970a5450b6b18862ef0c3ad02b9ed34a2674a41f1a5f7113f8a2249dc19/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/19d9989080bb99254df4633b984ed6ac361fb3f67806322eddb375cdee316de2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/19d9989080bb99254df4633b984ed6ac361fb3f67806322eddb375cdee316de2/userdata/shm major:0 minor:251 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/1ee1fa592b43fd04f438a18672ba5cbe2212eefd748a0d3d95e70d1fbb463e36/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ee1fa592b43fd04f438a18672ba5cbe2212eefd748a0d3d95e70d1fbb463e36/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2098d43302ad0e00931b30fb0473a362fee9e9000b89c27552d72a632e47afbd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2098d43302ad0e00931b30fb0473a362fee9e9000b89c27552d72a632e47afbd/userdata/shm major:0 minor:264 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/24dc2549a8ac6f39dd6f57c57f717e50a501dd15d60d7e2a80b78b592b931b48/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/24dc2549a8ac6f39dd6f57c57f717e50a501dd15d60d7e2a80b78b592b931b48/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2697e850ca89be32459183985b3f9fee84b93466b86c6d103ecf18157fa8b712/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2697e850ca89be32459183985b3f9fee84b93466b86c6d103ecf18157fa8b712/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/343ebb9e9f7133e28dc8b97a72067095722cd38fc5a1cd6bd72819c24b19f9a4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/343ebb9e9f7133e28dc8b97a72067095722cd38fc5a1cd6bd72819c24b19f9a4/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/35dc923311215b12bc6926327888353ee4dac03edf2bd01fd1709920b747d038/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35dc923311215b12bc6926327888353ee4dac03edf2bd01fd1709920b747d038/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3678d76d6368f04d7424fd0ae731dc627699ae26c8d8180a738d9913435c9819/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3678d76d6368f04d7424fd0ae731dc627699ae26c8d8180a738d9913435c9819/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/37840cae91bb38842e33e47936d655dcd095da55d1359acc8622a63bc2e2f08c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/37840cae91bb38842e33e47936d655dcd095da55d1359acc8622a63bc2e2f08c/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/458ff1fddfd5f3b95a485a4b0cb8e88a31c5825a6f8733cb5141f441c672f2be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/458ff1fddfd5f3b95a485a4b0cb8e88a31c5825a6f8733cb5141f441c672f2be/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4e52f6642c159916a88506443432057d57f997d443e11ff2cb2903a38a0ee186/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4e52f6642c159916a88506443432057d57f997d443e11ff2cb2903a38a0ee186/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/629398d15647c2b03b039e1c1901983e50f62b43495a0b3d1356a29ab7579f04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/629398d15647c2b03b039e1c1901983e50f62b43495a0b3d1356a29ab7579f04/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/6923888f2474b2621a6d1f7b4784be73fc6d36844a46c111dbeb08c776fa9c52/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6923888f2474b2621a6d1f7b4784be73fc6d36844a46c111dbeb08c776fa9c52/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6c67f7f67f1c3846811df64ad69df747ba5f98e7284620b7efb4801ff2425be1/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6c67f7f67f1c3846811df64ad69df747ba5f98e7284620b7efb4801ff2425be1/userdata/shm major:0 minor:41 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7477b641a786f084712c4f118bc6505bfe95f699f9d24590d99cd384fbe82b5c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7477b641a786f084712c4f118bc6505bfe95f699f9d24590d99cd384fbe82b5c/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/96f6e8e91d7109dc966f1dd2cbd1b74212480a19ccee4443647cc163d94cfaba/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/96f6e8e91d7109dc966f1dd2cbd1b74212480a19ccee4443647cc163d94cfaba/userdata/shm major:0 minor:50 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc8f7d43b71dfb70df609090acace3d9c40c52d842b2f9e449644f3b06944eff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc8f7d43b71dfb70df609090acace3d9c40c52d842b2f9e449644f3b06944eff/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c6b3f392e02f5ed94d399a015a546ebd73a07ae53ff9ae5634f2dda7569b0d7e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c6b3f392e02f5ed94d399a015a546ebd73a07ae53ff9ae5634f2dda7569b0d7e/userdata/shm major:0 minor:58 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/da609cd6cbb5b9e771ac633c351aa8997603432a2f5300b5aa8eef97f27120bb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da609cd6cbb5b9e771ac633c351aa8997603432a2f5300b5aa8eef97f27120bb/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de825527d944f688f2acf2625cf8789a7117e73fdf8ca84b446d4e5ce667dc74/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de825527d944f688f2acf2625cf8789a7117e73fdf8ca84b446d4e5ce667dc74/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ec17a1f92974fc202f31cbb68ea7af983419d8c972a92fa5e88ff84c017f8e6d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ec17a1f92974fc202f31cbb68ea7af983419d8c972a92fa5e88ff84c017f8e6d/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f26f2fe408a83b7887b45acd945c90cef651bf2e6e61b90316af3ed0a1cd741e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f26f2fe408a83b7887b45acd945c90cef651bf2e6e61b90316af3ed0a1cd741e/userdata/shm major:0 minor:241 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/161d2fa6-a541-427a-a3e9-3297102a26f5/volumes/kubernetes.io~projected/kube-api-access-q5lg5:{mountpoint:/var/lib/kubelet/pods/161d2fa6-a541-427a-a3e9-3297102a26f5/volumes/kubernetes.io~projected/kube-api-access-q5lg5 major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~projected/kube-api-access-pqfj5:{mountpoint:/var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~projected/kube-api-access-pqfj5 major:0 minor:252 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~secret/serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~projected/kube-api-access-smhrl:{mountpoint:/var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~projected/kube-api-access-smhrl major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2d368174-c659-444e-ba28-8fa267c0eda6/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/2d368174-c659-444e-ba28-8fa267c0eda6/volumes/kubernetes.io~projected/kube-api-access major:0 minor:98 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31f19d97-50f9-4486-a8f9-df61ef2b0528/volumes/kubernetes.io~projected/kube-api-access-4bzs5:{mountpoint:/var/lib/kubelet/pods/31f19d97-50f9-4486-a8f9-df61ef2b0528/volumes/kubernetes.io~projected/kube-api-access-4bzs5 major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/46015913-c499-49b1-a9f6-a61c6e96b13f/volumes/kubernetes.io~projected/kube-api-access-jc8xs:{mountpoint:/var/lib/kubelet/pods/46015913-c499-49b1-a9f6-a61c6e96b13f/volumes/kubernetes.io~projected/kube-api-access-jc8xs major:0 minor:230 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~projected/kube-api-access-n58nf:{mountpoint:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~projected/kube-api-access-n58nf major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/volumes/kubernetes.io~projected/kube-api-access-98t5h:{mountpoint:/var/lib/kubelet/pods/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/volumes/kubernetes.io~projected/kube-api-access-98t5h major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69da0e58-2ae6-4d4b-b125-77e93df3d660/volumes/kubernetes.io~projected/kube-api-access-pzxv5:{mountpoint:/var/lib/kubelet/pods/69da0e58-2ae6-4d4b-b125-77e93df3d660/volumes/kubernetes.io~projected/kube-api-access-pzxv5 major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ad2904e-ece9-4d72-8683-c3e691e07497/volumes/kubernetes.io~projected/kube-api-access-k5gc8:{mountpoint:/var/lib/kubelet/pods/6ad2904e-ece9-4d72-8683-c3e691e07497/volumes/kubernetes.io~projected/kube-api-access-k5gc8 major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~projected/kube-api-access-k8l9r:{mountpoint:/var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~projected/kube-api-access-k8l9r major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~projected/kube-api-access-8jkzq:{mountpoint:/var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~projected/kube-api-access-8jkzq major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/kube-api-access-zpdjh:{mountpoint:/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/kube-api-access-zpdjh major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~projected/kube-api-access-fz9qf:{mountpoint:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~projected/kube-api-access-fz9qf major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/etcd-client major:0 minor:209 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/7d874a21-43aa-4d81-b904-853fb3da5a94/volumes/kubernetes.io~projected/kube-api-access-4b8jr:{mountpoint:/var/lib/kubelet/pods/7d874a21-43aa-4d81-b904-853fb3da5a94/volumes/kubernetes.io~projected/kube-api-access-4b8jr major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ad2a6d5-6edf-4840-89f9-47847c8dac05/volumes/kubernetes.io~projected/kube-api-access-rrvhw:{mountpoint:/var/lib/kubelet/pods/8ad2a6d5-6edf-4840-89f9-47847c8dac05/volumes/kubernetes.io~projected/kube-api-access-rrvhw major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~projected/kube-api-access-t6wzz:{mountpoint:/var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~projected/kube-api-access-t6wzz major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~projected/kube-api-access-b4qsk:{mountpoint:/var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~projected/kube-api-access-b4qsk major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/kube-api-access-fhk76:{mountpoint:/var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/kube-api-access-fhk76 major:0 minor:224 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~projected/kube-api-access-4rg4g:{mountpoint:/var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~projected/kube-api-access-4rg4g major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/volumes/kubernetes.io~projected/kube-api-access-pj7cp:{mountpoint:/var/lib/kubelet/pods/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/volumes/kubernetes.io~projected/kube-api-access-pj7cp major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~projected/kube-api-access-ztdc9:{mountpoint:/var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~projected/kube-api-access-ztdc9 major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~projected/kube-api-access-txxbg:{mountpoint:/var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~projected/kube-api-access-txxbg major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d163333f-fda5-4067-ad7c-6f646ae411c8/volumes/kubernetes.io~projected/kube-api-access-v2jgj:{mountpoint:/var/lib/kubelet/pods/d163333f-fda5-4067-ad7c-6f646ae411c8/volumes/kubernetes.io~projected/kube-api-access-v2jgj major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~projected/kube-api-access-5xmqc:{mountpoint:/var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~projected/kube-api-access-5xmqc major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de46c12a-aa3e-442e-bcc4-365d05f50103/volumes/kubernetes.io~projected/kube-api-access-sjkgv:{mountpoint:/var/lib/kubelet/pods/de46c12a-aa3e-442e-bcc4-365d05f50103/volumes/kubernetes.io~projected/kube-api-access-sjkgv major:0 minor:101 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~projected/kube-api-access major:0 minor:232 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/volumes/kubernetes.io~projected/kube-api-access-2dlx5:{mountpoint:/var/lib/kubelet/pods/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/volumes/kubernetes.io~projected/kube-api-access-2dlx5 major:0 minor:118 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~projected/kube-api-access-2nbvg:{mountpoint:/var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~projected/kube-api-access-2nbvg major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~projected/kube-api-access major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/43109ccaebefc6548cfee70b45bd19623b6c3ac3f8d6d6ecc82a09932bc4a9dd/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-110:{mountpoint:/var/lib/containers/storage/overlay/e03029370d21c123b8cd9242353ee51f0b2056ba02b9b0602ae3e9604258e240/merged major:0 minor:110 fsType:overlay blockSize:0} 
overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/895885a58ff1a4adf2a6e4cb9e0fb01c8a921c27537a5abc59f2f60cbf819c10/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/2987bc90bb2b585659ecf426af4f82d579b3f0803d5d8492bd1d7d37c7bc8b87/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/8fafe0b081f68d169f9afd99488cd14bee8ddf0a709fe8db9c921f1e7f58c664/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/0abb975f7bf2daa3e2a9d1927541c1ad7d29b662a94c813ddb68169284d80cd3/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/6fc1cbc561fe4b92911d03ff123eac7408ff2f5bedea41f0bc5357fec565ff69/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/bf5bd8d4fd28886c648d43de7514c52876847fdb963accee5ed07d5cd4cb4107/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/6d3acb760c4b20d9748fdc3333ff0040f73b487a453566edfd46f07e36253b24/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/fbabe8d8a7c8f385ef3ce61da074874df1f88af26e33844ccb563a22aa890c2d/merged major:0 minor:152 fsType:overlay blockSize:0} overlay_0-154:{mountpoint:/var/lib/containers/storage/overlay/0d5b69708663e66a9b3cc66ebcbee55eba26627c57e5798fe6b14e488740e709/merged major:0 minor:154 fsType:overlay blockSize:0} overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/42d2b780b420ed44dd4846985307d8cc760a4d46d4226dfa4d0f44ea4852afd9/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-158:{mountpoint:/var/lib/containers/storage/overlay/36135ce64973da7b39a6be0587a60b995e0b3192a5bc36ddb3986a56a0fd5aec/merged major:0 minor:158 fsType:overlay blockSize:0} 
overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/70d55512a5d0b672370246657814e997fcf93d175d4a524c3ccb8f6300437869/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-171:{mountpoint:/var/lib/containers/storage/overlay/f2f1c108bd07dbd71690c3e28c5dc74ddbd1e4a2880c166a69a1bf01c89889e1/merged major:0 minor:171 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/d17c96ebf95bfd571c8fb1756eea80cd6ba9b0df7245f6951672a67455f18052/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/7d2edde02cc7f4b6bf935eace6543a41fa7744c9ae48aba8e07cf2d9c1ca2eb0/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/68a3cb19fc6979295f3fed3acc6389d459ffe1291c51c7b4f4bb3f988fcd43f4/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/11a8ffcde96a338ac18cee2bc5c119881aedcf3788dd07fccc369ccf48b7f0ab/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/620129c40abdc0746af65c3a4f4fa9668fbf7f05a7f75342fac6d5cdbca04eab/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/b951a512676ab97a2a776eedfce14119387a9d0504d88798974aeee6c8b6ca3a/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/5b554684d419eaf55cdda2ade3052c248dc6fcba4bb1208c94e14666effe2056/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/6a989c40041f293bbfbd96cf1b9920712ffafdaf6fc6d787fd47f1e491d7f557/merged major:0 minor:277 fsType:overlay blockSize:0} overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/99e3df40c144e27e2b40cc2e982f4769acf8a5a3087eaca635c8742594ce9773/merged major:0 minor:279 fsType:overlay blockSize:0} 
overlay_0-281:{mountpoint:/var/lib/containers/storage/overlay/32045b3c392888277bac92685f17f398a021f6e7db9e961eb33a8b4beb2641cd/merged major:0 minor:281 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/a23069968a762dfe9226e0114302dd2e09eea311e96f5f22bfdfb5b6b71cecc9/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/ae1d86132d00d8c198aa7e25f3be046e42bd41f31b5fe4fbe63980df035367d6/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/7f4896967f5bd54424fad276ba8ca08288d9698151834291d04f150cf7eeb094/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/ae13b16536c497bcde65d74b75f4dac6a76280afb57ce16f33c179edb20707f4/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/58f909d34990dafb4f72f74ed02b5c16002c775fdba4af2c9d3e80998269fbb2/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/97245a150e35511567d0450a18eee0f212c518b3ab71bf0c9f1b7340fdcbfa5d/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/768bca547e722d1841ab499bb61903494d16f2d40611c0b022b6555b84e04f6a/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/f9593cc36923c95204786679ca2c2aec6fbdf844972ef9081766704237c891b0/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/037b3e2043c8e92204d0771411e5aaa2799f8f6f7e47a29c6ee19eacb8ec50c7/merged major:0 minor:303 fsType:overlay blockSize:0} overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/34de7f109a1f2a314403c7c0bd583ffadf1c0b520d336b46a07ad865451c59ea/merged major:0 minor:305 fsType:overlay blockSize:0} 
overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/88ce41e42e63a800a070b29c4647525f2ebc1e743e7d9baa5103a074b1ccfbee/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-309:{mountpoint:/var/lib/containers/storage/overlay/f1515ee6a4650eebc2c0d93ea71d0e82c310445f05601a613e9342288db68a57/merged major:0 minor:309 fsType:overlay blockSize:0} overlay_0-311:{mountpoint:/var/lib/containers/storage/overlay/7b68a68e25054f208377e6a94add8ded9a841648e53ce932e3f2339d19bd013c/merged major:0 minor:311 fsType:overlay blockSize:0} overlay_0-313:{mountpoint:/var/lib/containers/storage/overlay/2d468c1845bd7127f36a9b63130ba398b0a2ccbd595a8acc6892ca5e06ac7ab4/merged major:0 minor:313 fsType:overlay blockSize:0} overlay_0-317:{mountpoint:/var/lib/containers/storage/overlay/4c9dc3b31010e8ce2750c9972c439b574e85ffd5689be7d6c1ad7334063715f2/merged major:0 minor:317 fsType:overlay blockSize:0} overlay_0-319:{mountpoint:/var/lib/containers/storage/overlay/70be34269d45fc27bb0de7bb0cdbedee78ce930ca5a74ff3db6b18fcbb278066/merged major:0 minor:319 fsType:overlay blockSize:0} overlay_0-321:{mountpoint:/var/lib/containers/storage/overlay/2cccbb5b931cc304172cea172b592859b4470f0cc0234fcee7110c11eac0ef3a/merged major:0 minor:321 fsType:overlay blockSize:0} overlay_0-325:{mountpoint:/var/lib/containers/storage/overlay/6c162c30b8b94dd949eeb7c064132025f6b660117727bb045572a5d2c7caa97e/merged major:0 minor:325 fsType:overlay blockSize:0} overlay_0-327:{mountpoint:/var/lib/containers/storage/overlay/fa1dd5935cd769418cea09cd2e4e23d5c333940994e00ec674bf3bece5367d35/merged major:0 minor:327 fsType:overlay blockSize:0} overlay_0-329:{mountpoint:/var/lib/containers/storage/overlay/79c9379863717a3f375ef792e89c40503635ee61c502ce764d65515f00524bb4/merged major:0 minor:329 fsType:overlay blockSize:0} overlay_0-339:{mountpoint:/var/lib/containers/storage/overlay/aeb28fc3603baa090d58df42f9bc40173b9164a6e6a98323dc982bcf2a9df5d5/merged major:0 minor:339 fsType:overlay blockSize:0} 
overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/b44b6a1a05147be82a266ac5337c98bc37d657643f907396a3801f1774204c8c/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/87f59f7cd57d661592abbcdfc61d3f3a7d34c2d76ff5d908bd0024c4417f15c3/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/d2d96a8fb0f707572297eec0481c209473127ec31cb5dc8570edd1696bb520d7/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/f7bf12743c6fea821b6eb8b168fdfd607cadf9a65f6d60d8ce11be2d7859eb7e/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-60:{mountpoint:/var/lib/containers/storage/overlay/503cd8b202d4ec8bd59dead9fefb064dd0fb6c2c425cbbb4f5a8e5abc7e17b9e/merged major:0 minor:60 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/764d136564a43944208113f31ee511953a1f149c39f122482ba3c4299f8e1386/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/f269ae2c7fb9802e10f1ba4616d906e572d5ea8c1ce48dfff5a19935ca374c17/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/66f990d7f52e922bb392460a2743e987c03c07ea5bc785df8e98abeb9abb82ab/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/2fa445e3d8abb540bf4d590132ada4ebfd8bac15ae0a1148235f89cf7502e805/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-78:{mountpoint:/var/lib/containers/storage/overlay/a4dccf36789526c680d9679f960522cbeba07b0b5cd5ddf21c9dee4f8a57eae5/merged major:0 minor:78 fsType:overlay blockSize:0} overlay_0-81:{mountpoint:/var/lib/containers/storage/overlay/b9949d09ce434cd15c6b73c97472b9b7d1b26d6d5b7bf76a0fab165d356b3edc/merged major:0 minor:81 fsType:overlay blockSize:0} 
overlay_0-89:{mountpoint:/var/lib/containers/storage/overlay/347fda2720e5c91669ccd63812daeb18bd4d55fe2ae37668eb1ea739efe4ec6c/merged major:0 minor:89 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/6d70a8d09419542b77a3e82395cb2b41c588099232e45d8847afd012f6f18f4e/merged major:0 minor:94 fsType:overlay blockSize:0}] Mar 13 01:12:20.900052 master-0 kubenswrapper[7599]: I0313 01:12:20.899300 7599 manager.go:217] Machine: {Timestamp:2026-03-13 01:12:20.898033145 +0000 UTC m=+0.169712569 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:3a0a52883c534d178c5b12dafb817e60 SystemUUID:3a0a5288-3c53-4d17-8c5b-12dafb817e60 BootID:b5890e11-c274-4f10-a685-d6fee1e9f87f Filesystems:[{Device:/var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:238 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da609cd6cbb5b9e771ac633c351aa8997603432a2f5300b5aa8eef97f27120bb/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-311 DeviceMajor:0 DeviceMinor:311 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/19d9989080bb99254df4633b984ed6ac361fb3f67806322eddb375cdee316de2/userdata/shm DeviceMajor:0 DeviceMinor:251 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/629398d15647c2b03b039e1c1901983e50f62b43495a0b3d1356a29ab7579f04/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4e52f6642c159916a88506443432057d57f997d443e11ff2cb2903a38a0ee186/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/96f6e8e91d7109dc966f1dd2cbd1b74212480a19ccee4443647cc163d94cfaba/userdata/shm DeviceMajor:0 DeviceMinor:50 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-81 DeviceMajor:0 DeviceMinor:81 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/09550970a5450b6b18862ef0c3ad02b9ed34a2674a41f1a5f7113f8a2249dc19/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-158 DeviceMajor:0 DeviceMinor:158 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~projected/kube-api-access-k8l9r DeviceMajor:0 DeviceMinor:235 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/kube-api-access-zpdjh DeviceMajor:0 DeviceMinor:240 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2d368174-c659-444e-ba28-8fa267c0eda6/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:98 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~projected/kube-api-access-pqfj5 DeviceMajor:0 DeviceMinor:252 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6c67f7f67f1c3846811df64ad69df747ba5f98e7284620b7efb4801ff2425be1/userdata/shm DeviceMajor:0 DeviceMinor:41 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-321 DeviceMajor:0 DeviceMinor:321 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~projected/kube-api-access-4rg4g DeviceMajor:0 DeviceMinor:227 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:243 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8ad2a6d5-6edf-4840-89f9-47847c8dac05/volumes/kubernetes.io~projected/kube-api-access-rrvhw DeviceMajor:0 DeviceMinor:256 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~projected/kube-api-access-smhrl DeviceMajor:0 DeviceMinor:229 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-329 DeviceMajor:0 DeviceMinor:329 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-154 DeviceMajor:0 DeviceMinor:154 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 
DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~projected/kube-api-access-b4qsk DeviceMajor:0 DeviceMinor:239 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de825527d944f688f2acf2625cf8789a7117e73fdf8ca84b446d4e5ce667dc74/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~projected/kube-api-access-t6wzz DeviceMajor:0 DeviceMinor:125 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6ad2904e-ece9-4d72-8683-c3e691e07497/volumes/kubernetes.io~projected/kube-api-access-k5gc8 DeviceMajor:0 DeviceMinor:221 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:244 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/161d2fa6-a541-427a-a3e9-3297102a26f5/volumes/kubernetes.io~projected/kube-api-access-q5lg5 DeviceMajor:0 DeviceMinor:236 Capacity:32475533312 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-339 DeviceMajor:0 DeviceMinor:339 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~projected/kube-api-access-n58nf DeviceMajor:0 DeviceMinor:127 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-317 DeviceMajor:0 DeviceMinor:317 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-60 DeviceMajor:0 DeviceMinor:60 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:209 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-313 DeviceMajor:0 DeviceMinor:313 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31f19d97-50f9-4486-a8f9-df61ef2b0528/volumes/kubernetes.io~projected/kube-api-access-4bzs5 DeviceMajor:0 DeviceMinor:249 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-171 DeviceMajor:0 DeviceMinor:171 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~projected/kube-api-access-fz9qf DeviceMajor:0 DeviceMinor:225 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~projected/kube-api-access-ztdc9 DeviceMajor:0 DeviceMinor:226 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:232 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc8f7d43b71dfb70df609090acace3d9c40c52d842b2f9e449644f3b06944eff/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-319 DeviceMajor:0 DeviceMinor:319 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-110 DeviceMajor:0 DeviceMinor:110 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/volumes/kubernetes.io~projected/kube-api-access-98t5h DeviceMajor:0 DeviceMinor:250 Capacity:32475533312 Type:vfs 
Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7477b641a786f084712c4f118bc6505bfe95f699f9d24590d99cd384fbe82b5c/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-309 DeviceMajor:0 DeviceMinor:309 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/24dc2549a8ac6f39dd6f57c57f717e50a501dd15d60d7e2a80b78b592b931b48/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/458ff1fddfd5f3b95a485a4b0cb8e88a31c5825a6f8733cb5141f441c672f2be/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2098d43302ad0e00931b30fb0473a362fee9e9000b89c27552d72a632e47afbd/userdata/shm DeviceMajor:0 DeviceMinor:264 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6923888f2474b2621a6d1f7b4784be73fc6d36844a46c111dbeb08c776fa9c52/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ee1fa592b43fd04f438a18672ba5cbe2212eefd748a0d3d95e70d1fbb463e36/userdata/shm DeviceMajor:0 DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/de46c12a-aa3e-442e-bcc4-365d05f50103/volumes/kubernetes.io~projected/kube-api-access-sjkgv DeviceMajor:0 DeviceMinor:101 Capacity:32475533312 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/kube-api-access-fhk76 DeviceMajor:0 DeviceMinor:224 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2697e850ca89be32459183985b3f9fee84b93466b86c6d103ecf18157fa8b712/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3678d76d6368f04d7424fd0ae731dc627699ae26c8d8180a738d9913435c9819/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/46015913-c499-49b1-a9f6-a61c6e96b13f/volumes/kubernetes.io~projected/kube-api-access-jc8xs DeviceMajor:0 DeviceMinor:230 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/343ebb9e9f7133e28dc8b97a72067095722cd38fc5a1cd6bd72819c24b19f9a4/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/37840cae91bb38842e33e47936d655dcd095da55d1359acc8622a63bc2e2f08c/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-327 DeviceMajor:0 DeviceMinor:327 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/69da0e58-2ae6-4d4b-b125-77e93df3d660/volumes/kubernetes.io~projected/kube-api-access-pzxv5 DeviceMajor:0 DeviceMinor:248 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-89 DeviceMajor:0 DeviceMinor:89 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~projected/kube-api-access-5xmqc DeviceMajor:0 DeviceMinor:99 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475533312 
Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~projected/kube-api-access-8jkzq DeviceMajor:0 DeviceMinor:231 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f26f2fe408a83b7887b45acd945c90cef651bf2e6e61b90316af3ed0a1cd741e/userdata/shm DeviceMajor:0 DeviceMinor:241 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c6b3f392e02f5ed94d399a015a546ebd73a07ae53ff9ae5634f2dda7569b0d7e/userdata/shm DeviceMajor:0 DeviceMinor:58 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ec17a1f92974fc202f31cbb68ea7af983419d8c972a92fa5e88ff84c017f8e6d/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~projected/kube-api-access-txxbg DeviceMajor:0 DeviceMinor:139 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/7d874a21-43aa-4d81-b904-853fb3da5a94/volumes/kubernetes.io~projected/kube-api-access-4b8jr DeviceMajor:0 DeviceMinor:234 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/volumes/kubernetes.io~projected/kube-api-access-2dlx5 DeviceMajor:0 DeviceMinor:118 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35dc923311215b12bc6926327888353ee4dac03edf2bd01fd1709920b747d038/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d163333f-fda5-4067-ad7c-6f646ae411c8/volumes/kubernetes.io~projected/kube-api-access-v2jgj DeviceMajor:0 DeviceMinor:233 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-325 DeviceMajor:0 DeviceMinor:325 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-78 DeviceMajor:0 DeviceMinor:78 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/volumes/kubernetes.io~projected/kube-api-access-pj7cp DeviceMajor:0 DeviceMinor:123 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-281 DeviceMajor:0 DeviceMinor:281 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~projected/kube-api-access-2nbvg 
DeviceMajor:0 DeviceMinor:228 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:237 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:19d9989080bb992 MacAddress:c2:e6:e1:c0:72:01 Speed:10000 Mtu:8900} {Name:1ee1fa592b43fd0 MacAddress:a6:fe:ec:1e:ff:9b Speed:10000 Mtu:8900} {Name:2697e850ca89be3 MacAddress:3e:30:1b:d5:63:bc Speed:10000 Mtu:8900} {Name:343ebb9e9f7133e MacAddress:12:89:57:f0:45:93 Speed:10000 Mtu:8900} {Name:3678d76d6368f04 MacAddress:16:22:30:dd:6a:07 Speed:10000 Mtu:8900} {Name:37840cae91bb388 MacAddress:5a:7c:f0:9e:5b:a8 Speed:10000 Mtu:8900} {Name:629398d15647c2b MacAddress:f2:33:cd:25:a8:21 Speed:10000 Mtu:8900} {Name:6923888f2474b26 MacAddress:7e:c0:a1:07:ff:81 Speed:10000 Mtu:8900} {Name:7477b641a786f08 MacAddress:d6:b4:43:9e:aa:1c Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:a2:da:ad:9d:3f:92 Speed:0 Mtu:8900} {Name:da609cd6cbb5b9e MacAddress:7a:82:86:d7:b0:5c Speed:10000 Mtu:8900} {Name:de825527d944f68 MacAddress:7e:b4:c7:9b:06:52 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:f6:d3:bd Speed:-1 Mtu:9000} {Name:f26f2fe408a83b7 MacAddress:c2:48:3b:93:a1:1f Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:d6:2f:ab:d3:f0:10 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} 
{PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 
Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 01:12:20.900052 master-0 kubenswrapper[7599]: I0313 01:12:20.900023 7599 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. 
Mar 13 01:12:20.900677 master-0 kubenswrapper[7599]: I0313 01:12:20.900352 7599 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 01:12:20.900933 master-0 kubenswrapper[7599]: I0313 01:12:20.900685 7599 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 01:12:20.900933 master-0 kubenswrapper[7599]: I0313 01:12:20.900885 7599 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 01:12:20.901220 master-0 kubenswrapper[7599]: I0313 01:12:20.900917 7599 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Mar 13 01:12:20.901320 master-0 kubenswrapper[7599]: I0313 01:12:20.901232 7599 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 01:12:20.901320 master-0 kubenswrapper[7599]: I0313 01:12:20.901249 7599 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 01:12:20.901320 master-0 kubenswrapper[7599]: I0313 01:12:20.901261 7599 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 01:12:20.901320 master-0 kubenswrapper[7599]: I0313 01:12:20.901291 7599 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 01:12:20.901582 master-0 kubenswrapper[7599]: I0313 01:12:20.901470 7599 state_mem.go:36] "Initialized new in-memory state store" Mar 13 01:12:20.901643 master-0 kubenswrapper[7599]: I0313 01:12:20.901596 7599 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 13 01:12:20.901698 master-0 kubenswrapper[7599]: I0313 01:12:20.901666 7599 kubelet.go:418] "Attempting to sync node with API server" Mar 13 01:12:20.901698 master-0 kubenswrapper[7599]: I0313 01:12:20.901684 7599 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 01:12:20.901813 master-0 kubenswrapper[7599]: I0313 01:12:20.901704 7599 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 13 01:12:20.901813 master-0 kubenswrapper[7599]: I0313 01:12:20.901723 7599 kubelet.go:324] "Adding apiserver pod source" Mar 13 01:12:20.901813 master-0 
kubenswrapper[7599]: I0313 01:12:20.901748 7599 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 01:12:20.903841 master-0 kubenswrapper[7599]: I0313 01:12:20.903787 7599 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 13 01:12:20.904117 master-0 kubenswrapper[7599]: I0313 01:12:20.904076 7599 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". Mar 13 01:12:20.904539 master-0 kubenswrapper[7599]: I0313 01:12:20.904474 7599 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 13 01:12:20.904709 master-0 kubenswrapper[7599]: I0313 01:12:20.904671 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Mar 13 01:12:20.904709 master-0 kubenswrapper[7599]: I0313 01:12:20.904703 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Mar 13 01:12:20.904709 master-0 kubenswrapper[7599]: I0313 01:12:20.904714 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Mar 13 01:12:20.904891 master-0 kubenswrapper[7599]: I0313 01:12:20.904725 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Mar 13 01:12:20.904891 master-0 kubenswrapper[7599]: I0313 01:12:20.904738 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Mar 13 01:12:20.904891 master-0 kubenswrapper[7599]: I0313 01:12:20.904748 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Mar 13 01:12:20.904891 master-0 kubenswrapper[7599]: I0313 01:12:20.904760 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Mar 13 01:12:20.904891 master-0 kubenswrapper[7599]: I0313 01:12:20.904770 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Mar 13 01:12:20.904891 master-0 
kubenswrapper[7599]: I0313 01:12:20.904782 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Mar 13 01:12:20.904891 master-0 kubenswrapper[7599]: I0313 01:12:20.904792 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Mar 13 01:12:20.904891 master-0 kubenswrapper[7599]: I0313 01:12:20.904827 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Mar 13 01:12:20.904891 master-0 kubenswrapper[7599]: I0313 01:12:20.904843 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Mar 13 01:12:20.905351 master-0 kubenswrapper[7599]: I0313 01:12:20.904933 7599 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Mar 13 01:12:20.905351 master-0 kubenswrapper[7599]: I0313 01:12:20.905337 7599 server.go:1280] "Started kubelet" Mar 13 01:12:20.907503 master-0 kubenswrapper[7599]: I0313 01:12:20.905553 7599 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 13 01:12:20.907503 master-0 kubenswrapper[7599]: I0313 01:12:20.905613 7599 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 13 01:12:20.907503 master-0 kubenswrapper[7599]: I0313 01:12:20.906393 7599 server_v1.go:47] "podresources" method="list" useActivePods=true Mar 13 01:12:20.906821 master-0 systemd[1]: Started Kubernetes Kubelet. 
Mar 13 01:12:20.921246 master-0 kubenswrapper[7599]: I0313 01:12:20.907259 7599 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 13 01:12:20.921246 master-0 kubenswrapper[7599]: I0313 01:12:20.917151 7599 server.go:449] "Adding debug handlers to kubelet server" Mar 13 01:12:20.936929 master-0 kubenswrapper[7599]: I0313 01:12:20.933475 7599 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Mar 13 01:12:20.936929 master-0 kubenswrapper[7599]: I0313 01:12:20.933588 7599 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 13 01:12:20.936929 master-0 kubenswrapper[7599]: I0313 01:12:20.934817 7599 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 01:02:11 +0000 UTC, rotation deadline is 2026-03-13 21:36:24.660338293 +0000 UTC Mar 13 01:12:20.936929 master-0 kubenswrapper[7599]: I0313 01:12:20.934897 7599 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 20h24m3.725476694s for next certificate rotation Mar 13 01:12:20.936929 master-0 kubenswrapper[7599]: I0313 01:12:20.935089 7599 volume_manager.go:287] "The desired_state_of_world populator starts" Mar 13 01:12:20.936929 master-0 kubenswrapper[7599]: I0313 01:12:20.935101 7599 volume_manager.go:289] "Starting Kubelet Volume Manager" Mar 13 01:12:20.936929 master-0 kubenswrapper[7599]: I0313 01:12:20.935249 7599 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Mar 13 01:12:20.936929 master-0 kubenswrapper[7599]: I0313 01:12:20.936153 7599 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 01:12:20.936929 master-0 kubenswrapper[7599]: I0313 01:12:20.936301 7599 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 01:12:20.938857 master-0 kubenswrapper[7599]: I0313 01:12:20.938809 7599 reflector.go:368] 
Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Mar 13 01:12:20.940305 master-0 kubenswrapper[7599]: E0313 01:12:20.935073 7599 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"master-0\" not found" Mar 13 01:12:20.941265 master-0 kubenswrapper[7599]: I0313 01:12:20.941164 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ad2a6d5-6edf-4840-89f9-47847c8dac05" volumeName="kubernetes.io/configmap/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-trusted-ca" seLinuxMountContext="" Mar 13 01:12:20.941265 master-0 kubenswrapper[7599]: I0313 01:12:20.941242 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" volumeName="kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-sysctl-allowlist" seLinuxMountContext="" Mar 13 01:12:20.941265 master-0 kubenswrapper[7599]: I0313 01:12:20.941258 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="161d2fa6-a541-427a-a3e9-3297102a26f5" volumeName="kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941270 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" volumeName="kubernetes.io/projected/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-kube-api-access-smhrl" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941289 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49a28ab7-1176-4213-b037-19fe18bbe57b" volumeName="kubernetes.io/projected/49a28ab7-1176-4213-b037-19fe18bbe57b-kube-api-access-n58nf" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 
kubenswrapper[7599]: I0313 01:12:20.941304 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6fd82994-f4d4-49e9-8742-07e206322e76" volumeName="kubernetes.io/empty-dir/6fd82994-f4d4-49e9-8742-07e206322e76-available-featuregates" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941317 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7" volumeName="kubernetes.io/configmap/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-trusted-ca" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941329 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91fc568a-61ad-400e-a54e-21d62e51bb17" volumeName="kubernetes.io/configmap/91fc568a-61ad-400e-a54e-21d62e51bb17-trusted-ca" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941351 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" volumeName="kubernetes.io/projected/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-kube-api-access-2dlx5" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941383 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fde89b0b-7133-4b97-9e35-51c0382bd366" volumeName="kubernetes.io/secret/fde89b0b-7133-4b97-9e35-51c0382bd366-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941395 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5757329-8692-4719-b3c7-b5df78110fcf" volumeName="kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-trusted-ca-bundle" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 
kubenswrapper[7599]: I0313 01:12:20.941425 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5757329-8692-4719-b3c7-b5df78110fcf" volumeName="kubernetes.io/projected/b5757329-8692-4719-b3c7-b5df78110fcf-kube-api-access-ztdc9" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941436 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6db75e5-efd1-4bfa-9941-0934d7621ba2" volumeName="kubernetes.io/configmap/c6db75e5-efd1-4bfa-9941-0934d7621ba2-config" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941451 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbfc2caf-126e-41b9-9b31-05f7a45d8536" volumeName="kubernetes.io/secret/fbfc2caf-126e-41b9-9b31-05f7a45d8536-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941463 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46015913-c499-49b1-a9f6-a61c6e96b13f" volumeName="kubernetes.io/projected/46015913-c499-49b1-a9f6-a61c6e96b13f-kube-api-access-jc8xs" seLinuxMountContext="" Mar 13 01:12:20.941478 master-0 kubenswrapper[7599]: I0313 01:12:20.941475 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49a28ab7-1176-4213-b037-19fe18bbe57b" volumeName="kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-script-lib" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941575 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d163333f-fda5-4067-ad7c-6f646ae411c8" volumeName="kubernetes.io/projected/d163333f-fda5-4067-ad7c-6f646ae411c8-kube-api-access-v2jgj" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 
kubenswrapper[7599]: I0313 01:12:20.941658 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c377a67-e763-4925-afae-a7f8546a369b" volumeName="kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-env-overrides" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941719 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c377a67-e763-4925-afae-a7f8546a369b" volumeName="kubernetes.io/secret/8c377a67-e763-4925-afae-a7f8546a369b-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941735 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91fc568a-61ad-400e-a54e-21d62e51bb17" volumeName="kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-kube-api-access-fhk76" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941747 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2f0667c-90d6-4a6b-b540-9bd0ab5973ea" volumeName="kubernetes.io/secret/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941760 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74efa52b-fd97-418a-9a44-914442633f74" volumeName="kubernetes.io/projected/74efa52b-fd97-418a-9a44-914442633f74-kube-api-access-8jkzq" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941773 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-config" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 
kubenswrapper[7599]: I0313 01:12:20.941786 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-service-ca" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941799 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ad2a6d5-6edf-4840-89f9-47847c8dac05" volumeName="kubernetes.io/projected/8ad2a6d5-6edf-4840-89f9-47847c8dac05-kube-api-access-rrvhw" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941828 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b67a99-eada-44d7-93eb-cc3ced777fc6" volumeName="kubernetes.io/secret/96b67a99-eada-44d7-93eb-cc3ced777fc6-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941844 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc" volumeName="kubernetes.io/projected/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-kube-api-access-5xmqc" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941878 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2f0667c-90d6-4a6b-b540-9bd0ab5973ea" volumeName="kubernetes.io/projected/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-kube-api-access" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941892 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fde89b0b-7133-4b97-9e35-51c0382bd366" volumeName="kubernetes.io/projected/fde89b0b-7133-4b97-9e35-51c0382bd366-kube-api-access" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 
kubenswrapper[7599]: I0313 01:12:20.941904 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d368174-c659-444e-ba28-8fa267c0eda6" volumeName="kubernetes.io/configmap/2d368174-c659-444e-ba28-8fa267c0eda6-service-ca" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941916 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75a53c09-210a-4346-99b0-a632b9e0a3c9" volumeName="kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-kube-api-access-zpdjh" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941929 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c377a67-e763-4925-afae-a7f8546a369b" volumeName="kubernetes.io/projected/8c377a67-e763-4925-afae-a7f8546a369b-kube-api-access-t6wzz" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941941 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91fc568a-61ad-400e-a54e-21d62e51bb17" volumeName="kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-bound-sa-token" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941973 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c687237e-50e5-405d-8fef-0efbc3866630" volumeName="kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-env-overrides" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.941985 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de46c12a-aa3e-442e-bcc4-365d05f50103" volumeName="kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-daemon-config" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 
kubenswrapper[7599]: I0313 01:12:20.941998 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" volumeName="kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-whereabouts-configmap" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942011 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbfc2caf-126e-41b9-9b31-05f7a45d8536" volumeName="kubernetes.io/projected/fbfc2caf-126e-41b9-9b31-05f7a45d8536-kube-api-access-2nbvg" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942040 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31f19d97-50f9-4486-a8f9-df61ef2b0528" volumeName="kubernetes.io/projected/31f19d97-50f9-4486-a8f9-df61ef2b0528-kube-api-access-4bzs5" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942052 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46015913-c499-49b1-a9f6-a61c6e96b13f" volumeName="kubernetes.io/configmap/46015913-c499-49b1-a9f6-a61c6e96b13f-telemetry-config" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942064 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942076 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c377a67-e763-4925-afae-a7f8546a369b" volumeName="kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-ovnkube-config" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 
kubenswrapper[7599]: I0313 01:12:20.942130 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7" volumeName="kubernetes.io/projected/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-kube-api-access-b4qsk" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942144 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5757329-8692-4719-b3c7-b5df78110fcf" volumeName="kubernetes.io/secret/b5757329-8692-4719-b3c7-b5df78110fcf-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942178 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc" volumeName="kubernetes.io/secret/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-metrics-tls" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942196 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6db75e5-efd1-4bfa-9941-0934d7621ba2" volumeName="kubernetes.io/secret/c6db75e5-efd1-4bfa-9941-0934d7621ba2-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942234 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de46c12a-aa3e-442e-bcc4-365d05f50103" volumeName="kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-cni-binary-copy" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942247 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49a28ab7-1176-4213-b037-19fe18bbe57b" volumeName="kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-config" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 
01:12:20.942260 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49a28ab7-1176-4213-b037-19fe18bbe57b" volumeName="kubernetes.io/secret/49a28ab7-1176-4213-b037-19fe18bbe57b-ovn-node-metrics-cert" seLinuxMountContext="" Mar 13 01:12:20.942212 master-0 kubenswrapper[7599]: I0313 01:12:20.942272 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6fd82994-f4d4-49e9-8742-07e206322e76" volumeName="kubernetes.io/projected/6fd82994-f4d4-49e9-8742-07e206322e76-kube-api-access-k8l9r" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942299 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74efa52b-fd97-418a-9a44-914442633f74" volumeName="kubernetes.io/configmap/74efa52b-fd97-418a-9a44-914442633f74-config" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942312 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75a53c09-210a-4346-99b0-a632b9e0a3c9" volumeName="kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-bound-sa-token" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942324 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/projected/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-kube-api-access-fz9qf" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942355 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b" volumeName="kubernetes.io/secret/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 
01:12:20.942441 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59" volumeName="kubernetes.io/projected/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-kube-api-access-98t5h" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942454 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69da0e58-2ae6-4d4b-b125-77e93df3d660" volumeName="kubernetes.io/projected/69da0e58-2ae6-4d4b-b125-77e93df3d660-kube-api-access-pzxv5" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942468 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ad2904e-ece9-4d72-8683-c3e691e07497" volumeName="kubernetes.io/projected/6ad2904e-ece9-4d72-8683-c3e691e07497-kube-api-access-k5gc8" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942480 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-client" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942548 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" volumeName="kubernetes.io/projected/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-kube-api-access-pj7cp" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942562 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5757329-8692-4719-b3c7-b5df78110fcf" volumeName="kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-config" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 
01:12:20.942575 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbfc2caf-126e-41b9-9b31-05f7a45d8536" volumeName="kubernetes.io/configmap/fbfc2caf-126e-41b9-9b31-05f7a45d8536-config" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942589 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6fd82994-f4d4-49e9-8742-07e206322e76" volumeName="kubernetes.io/secret/6fd82994-f4d4-49e9-8742-07e206322e76-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942602 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c687237e-50e5-405d-8fef-0efbc3866630" volumeName="kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-ovnkube-identity-cm" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942614 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c687237e-50e5-405d-8fef-0efbc3866630" volumeName="kubernetes.io/secret/c687237e-50e5-405d-8fef-0efbc3866630-webhook-cert" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942625 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c687237e-50e5-405d-8fef-0efbc3866630" volumeName="kubernetes.io/projected/c687237e-50e5-405d-8fef-0efbc3866630-kube-api-access-txxbg" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942636 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" volumeName="kubernetes.io/empty-dir/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-operand-assets" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942666 7599 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74efa52b-fd97-418a-9a44-914442633f74" volumeName="kubernetes.io/secret/74efa52b-fd97-418a-9a44-914442633f74-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942677 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75a53c09-210a-4346-99b0-a632b9e0a3c9" volumeName="kubernetes.io/configmap/75a53c09-210a-4346-99b0-a632b9e0a3c9-trusted-ca" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942688 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-ca" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942699 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49a28ab7-1176-4213-b037-19fe18bbe57b" volumeName="kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-env-overrides" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942752 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b67a99-eada-44d7-93eb-cc3ced777fc6" volumeName="kubernetes.io/projected/96b67a99-eada-44d7-93eb-cc3ced777fc6-kube-api-access-4rg4g" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942765 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5757329-8692-4719-b3c7-b5df78110fcf" volumeName="kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-service-ca-bundle" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942776 7599 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" volumeName="kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-binary-copy" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942796 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d874a21-43aa-4d81-b904-853fb3da5a94" volumeName="kubernetes.io/projected/7d874a21-43aa-4d81-b904-853fb3da5a94-kube-api-access-4b8jr" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942848 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b67a99-eada-44d7-93eb-cc3ced777fc6" volumeName="kubernetes.io/configmap/96b67a99-eada-44d7-93eb-cc3ced777fc6-config" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942860 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de46c12a-aa3e-442e-bcc4-365d05f50103" volumeName="kubernetes.io/projected/de46c12a-aa3e-442e-bcc4-365d05f50103-kube-api-access-sjkgv" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942919 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fde89b0b-7133-4b97-9e35-51c0382bd366" volumeName="kubernetes.io/configmap/fde89b0b-7133-4b97-9e35-51c0382bd366-config" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942936 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69da0e58-2ae6-4d4b-b125-77e93df3d660" volumeName="kubernetes.io/configmap/69da0e58-2ae6-4d4b-b125-77e93df3d660-iptables-alerter-script" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942948 7599 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="c6db75e5-efd1-4bfa-9941-0934d7621ba2" volumeName="kubernetes.io/projected/c6db75e5-efd1-4bfa-9941-0934d7621ba2-kube-api-access" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942959 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2f0667c-90d6-4a6b-b540-9bd0ab5973ea" volumeName="kubernetes.io/configmap/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-config" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942970 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b" volumeName="kubernetes.io/configmap/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-config" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.942981 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b" volumeName="kubernetes.io/projected/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-kube-api-access-pqfj5" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.943010 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" volumeName="kubernetes.io/secret/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-cluster-olm-operator-serving-cert" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.943022 7599 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2d368174-c659-444e-ba28-8fa267c0eda6" volumeName="kubernetes.io/projected/2d368174-c659-444e-ba28-8fa267c0eda6-kube-api-access" seLinuxMountContext="" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.943033 7599 reconstruct.go:97] "Volume reconstruction 
finished" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: I0313 01:12:20.943044 7599 reconciler.go:26] "Reconciler: start to sync state" Mar 13 01:12:20.944395 master-0 kubenswrapper[7599]: E0313 01:12:20.943886 7599 kubelet.go:1495] "Image garbage collection failed once. Stats initialization may not have completed yet" err="failed to get imageFs info: unable to find data in memory cache" Mar 13 01:12:20.946487 master-0 kubenswrapper[7599]: I0313 01:12:20.945970 7599 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 13 01:12:20.949833 master-0 kubenswrapper[7599]: I0313 01:12:20.949774 7599 factory.go:55] Registering systemd factory Mar 13 01:12:20.949833 master-0 kubenswrapper[7599]: I0313 01:12:20.949836 7599 factory.go:221] Registration of the systemd container factory successfully Mar 13 01:12:20.951394 master-0 kubenswrapper[7599]: I0313 01:12:20.950387 7599 factory.go:153] Registering CRI-O factory Mar 13 01:12:20.951394 master-0 kubenswrapper[7599]: I0313 01:12:20.950424 7599 factory.go:221] Registration of the crio container factory successfully Mar 13 01:12:20.951394 master-0 kubenswrapper[7599]: I0313 01:12:20.950585 7599 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Mar 13 01:12:20.951394 master-0 kubenswrapper[7599]: I0313 01:12:20.950620 7599 factory.go:103] Registering Raw factory Mar 13 01:12:20.951394 master-0 kubenswrapper[7599]: I0313 01:12:20.950647 7599 manager.go:1196] Started watching for new ooms in manager Mar 13 01:12:20.951394 master-0 kubenswrapper[7599]: I0313 01:12:20.951234 7599 manager.go:319] Starting recovery of all containers Mar 13 01:12:20.979311 master-0 kubenswrapper[7599]: I0313 01:12:20.979201 7599 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Mar 13 01:12:20.982063 master-0 kubenswrapper[7599]: I0313 01:12:20.982022 7599 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 13 01:12:20.982130 master-0 kubenswrapper[7599]: I0313 01:12:20.982067 7599 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 13 01:12:20.982130 master-0 kubenswrapper[7599]: I0313 01:12:20.982096 7599 kubelet.go:2335] "Starting kubelet main sync loop" Mar 13 01:12:20.982222 master-0 kubenswrapper[7599]: E0313 01:12:20.982148 7599 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 01:12:20.985964 master-0 kubenswrapper[7599]: I0313 01:12:20.985903 7599 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 01:12:20.996101 master-0 kubenswrapper[7599]: I0313 01:12:20.996042 7599 generic.go:334] "Generic (PLEG): container finished" podID="19460daa-7d22-4d32-899c-274b86c56a13" containerID="ffc5eb0505bcd1aede3306af3760c2bce7320e07eb88bcd177785bc53255cfa2" exitCode=0 Mar 13 01:12:21.009384 master-0 kubenswrapper[7599]: I0313 01:12:21.009329 7599 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="b6ea782ca75304abc2ccc9ab19e6d9b4a2889fe649ebf475c9c95d91d8dba102" exitCode=0 Mar 13 01:12:21.033672 master-0 kubenswrapper[7599]: I0313 01:12:21.033549 7599 generic.go:334] "Generic (PLEG): container finished" podID="348e0611-5b3c-4238-a571-813fc16057df" containerID="53dcbd61cdb4ba2de960bb2099fda9de5cc31628732654b744e0b56ff9b97460" exitCode=0 Mar 13 01:12:21.044238 master-0 kubenswrapper[7599]: I0313 01:12:21.044180 7599 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="0234ab75b7bd5b13b1837cf8436f89b14014ac9adcda65e897e6eb1551c1103a" exitCode=0 Mar 13 01:12:21.044369 master-0 kubenswrapper[7599]: 
I0313 01:12:21.044321 7599 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="2624aa9d22934134d13192016a21d94a8ed206c5e3cce209796939167e9e62b2" exitCode=0 Mar 13 01:12:21.044415 master-0 kubenswrapper[7599]: I0313 01:12:21.044401 7599 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="79b311e1fab325ef8d97bf345a46f71efc38634e77d8ae4e5e2904a28462f5b3" exitCode=0 Mar 13 01:12:21.044454 master-0 kubenswrapper[7599]: I0313 01:12:21.044416 7599 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="2b884799b97327428feac7cdc419e91ce2a3eaeb0bebe09185e54d595c2b45d1" exitCode=0 Mar 13 01:12:21.044454 master-0 kubenswrapper[7599]: I0313 01:12:21.044427 7599 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="1c472f002bfa4991c063677c722842d806f2f0b4d30948f00ee774d9c40c71d2" exitCode=0 Mar 13 01:12:21.044454 master-0 kubenswrapper[7599]: I0313 01:12:21.044441 7599 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="10183ca532088fab9b3fb6cb86be21e2b5c24c18173f81ce8ac9d9efb43524c5" exitCode=0 Mar 13 01:12:21.047725 master-0 kubenswrapper[7599]: I0313 01:12:21.047633 7599 generic.go:334] "Generic (PLEG): container finished" podID="6fd82994-f4d4-49e9-8742-07e206322e76" containerID="b07ddec5ef3c1ac03f780236e9b354e58153c6ffb31f2047f7405a97d9d4d4c1" exitCode=0 Mar 13 01:12:21.050320 master-0 kubenswrapper[7599]: I0313 01:12:21.050274 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 13 01:12:21.052487 master-0 kubenswrapper[7599]: I0313 01:12:21.052426 7599 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" 
containerID="6d8995670c2a83fdd48a121ac1de3a71b9ce55c04e64601cc3a96c583c68bc2c" exitCode=1 Mar 13 01:12:21.052487 master-0 kubenswrapper[7599]: I0313 01:12:21.052479 7599 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="a0021a247a97b068e059ad5f822a94ffb91a3ed3409e6c3e37ac414a6210ce2d" exitCode=0 Mar 13 01:12:21.058239 master-0 kubenswrapper[7599]: I0313 01:12:21.058136 7599 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="bbc1eef4848241d60b2e14297f83c2738656d477e9aca36b48290bd2306fa11f" exitCode=1 Mar 13 01:12:21.066988 master-0 kubenswrapper[7599]: I0313 01:12:21.066930 7599 generic.go:334] "Generic (PLEG): container finished" podID="49a28ab7-1176-4213-b037-19fe18bbe57b" containerID="84a75bf6c5b0aae138001278a5abd61d9c21955abcbf0e21925aa4e975040741" exitCode=0 Mar 13 01:12:21.082318 master-0 kubenswrapper[7599]: E0313 01:12:21.082246 7599 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 01:12:21.150189 master-0 kubenswrapper[7599]: I0313 01:12:21.150120 7599 manager.go:324] Recovery completed Mar 13 01:12:21.192008 master-0 kubenswrapper[7599]: I0313 01:12:21.191897 7599 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 01:12:21.192008 master-0 kubenswrapper[7599]: I0313 01:12:21.191959 7599 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 01:12:21.192008 master-0 kubenswrapper[7599]: I0313 01:12:21.192012 7599 state_mem.go:36] "Initialized new in-memory state store" Mar 13 01:12:21.192451 master-0 kubenswrapper[7599]: I0313 01:12:21.192377 7599 state_mem.go:88] "Updated default CPUSet" cpuSet="" Mar 13 01:12:21.192451 master-0 kubenswrapper[7599]: I0313 01:12:21.192405 7599 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 01:12:21.192613 master-0 kubenswrapper[7599]: I0313 01:12:21.192457 7599 state_checkpoint.go:136] "State checkpoint: 
restored state from checkpoint" Mar 13 01:12:21.192613 master-0 kubenswrapper[7599]: I0313 01:12:21.192474 7599 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 13 01:12:21.192613 master-0 kubenswrapper[7599]: I0313 01:12:21.192490 7599 policy_none.go:49] "None policy: Start" Mar 13 01:12:21.198375 master-0 kubenswrapper[7599]: I0313 01:12:21.198289 7599 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 01:12:21.198375 master-0 kubenswrapper[7599]: I0313 01:12:21.198371 7599 state_mem.go:35] "Initializing new in-memory state store" Mar 13 01:12:21.199550 master-0 kubenswrapper[7599]: I0313 01:12:21.199431 7599 state_mem.go:75] "Updated machine memory state" Mar 13 01:12:21.199550 master-0 kubenswrapper[7599]: I0313 01:12:21.199463 7599 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 13 01:12:21.283298 master-0 kubenswrapper[7599]: E0313 01:12:21.283186 7599 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 01:12:21.298992 master-0 kubenswrapper[7599]: I0313 01:12:21.298883 7599 manager.go:334] "Starting Device Plugin manager" Mar 13 01:12:21.299284 master-0 kubenswrapper[7599]: I0313 01:12:21.299249 7599 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 01:12:21.299338 master-0 kubenswrapper[7599]: I0313 01:12:21.299286 7599 server.go:79] "Starting device plugin registration server" Mar 13 01:12:21.300072 master-0 kubenswrapper[7599]: I0313 01:12:21.300038 7599 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 01:12:21.300192 master-0 kubenswrapper[7599]: I0313 01:12:21.300069 7599 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 13 01:12:21.300405 master-0 kubenswrapper[7599]: I0313 01:12:21.300367 7599 plugin_watcher.go:51] "Plugin Watcher Start" 
path="/var/lib/kubelet/plugins_registry" Mar 13 01:12:21.300636 master-0 kubenswrapper[7599]: I0313 01:12:21.300590 7599 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 01:12:21.300636 master-0 kubenswrapper[7599]: I0313 01:12:21.300616 7599 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 01:12:21.401092 master-0 kubenswrapper[7599]: I0313 01:12:21.400990 7599 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:12:21.404024 master-0 kubenswrapper[7599]: I0313 01:12:21.403970 7599 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:12:21.404106 master-0 kubenswrapper[7599]: I0313 01:12:21.404032 7599 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:12:21.404106 master-0 kubenswrapper[7599]: I0313 01:12:21.404043 7599 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:12:21.404189 master-0 kubenswrapper[7599]: I0313 01:12:21.404116 7599 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 01:12:21.433395 master-0 kubenswrapper[7599]: I0313 01:12:21.432163 7599 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 13 01:12:21.433395 master-0 kubenswrapper[7599]: I0313 01:12:21.432427 7599 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 13 01:12:21.684009 master-0 kubenswrapper[7599]: I0313 01:12:21.683836 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d309d321e2b3c142df3b5753d507bff20af97e5f4ec76c20a22f4d71bfceba91" Mar 13 01:12:21.684009 master-0 kubenswrapper[7599]: I0313 01:12:21.683910 7599 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-etcd/etcd-master-0-master-0","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","kube-system/bootstrap-kube-controller-manager-master-0","kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 01:12:21.684595 master-0 kubenswrapper[7599]: I0313 01:12:21.684465 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"8df1059c68299a3330235cc4d111397a59bfb0c4b40d95af664427109c129231"} Mar 13 01:12:21.684672 master-0 kubenswrapper[7599]: I0313 01:12:21.684603 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"6c9bd5245949231d7973259139b8774c20bbb32018502eb3bd133d4e8aa89584"} Mar 13 01:12:21.684672 master-0 kubenswrapper[7599]: I0313 01:12:21.684627 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerDied","Data":"b6ea782ca75304abc2ccc9ab19e6d9b4a2889fe649ebf475c9c95d91d8dba102"} Mar 13 01:12:21.684672 master-0 kubenswrapper[7599]: I0313 01:12:21.684662 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" event={"ID":"5f77c8e18b751d90bc0dfe2d4e304050","Type":"ContainerStarted","Data":"96f6e8e91d7109dc966f1dd2cbd1b74212480a19ccee4443647cc163d94cfaba"} Mar 13 01:12:21.684778 master-0 kubenswrapper[7599]: I0313 01:12:21.684683 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"a665d6a554bcc038bf3cf3aa905f1884c4c54fb9c32ce798ba9ecbaf1bab11e0"} Mar 13 01:12:21.684778 
master-0 kubenswrapper[7599]: I0313 01:12:21.684707 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"2a446f182b10829874f21b28a6050799a0e95cf3b7880d6db31740a7140ff67b"} Mar 13 01:12:21.684778 master-0 kubenswrapper[7599]: I0313 01:12:21.684731 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0-master-0" event={"ID":"354f29997baa583b6238f7de9108ee10","Type":"ContainerStarted","Data":"6c67f7f67f1c3846811df64ad69df747ba5f98e7284620b7efb4801ff2425be1"} Mar 13 01:12:21.684908 master-0 kubenswrapper[7599]: I0313 01:12:21.684797 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb40c51b631cfd8ed2d352b14bf92f1b865b72b8d5f97d0a609a8d216e8763a" Mar 13 01:12:21.684954 master-0 kubenswrapper[7599]: I0313 01:12:21.684868 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"42bca1f920cccc1592fa3eb549dd4fbc400b4f25b9bcf7ef0e6efb375c7c1e44"} Mar 13 01:12:21.684954 master-0 kubenswrapper[7599]: I0313 01:12:21.684943 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"6d8995670c2a83fdd48a121ac1de3a71b9ce55c04e64601cc3a96c583c68bc2c"} Mar 13 01:12:21.685027 master-0 kubenswrapper[7599]: I0313 01:12:21.684965 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"a0021a247a97b068e059ad5f822a94ffb91a3ed3409e6c3e37ac414a6210ce2d"} Mar 13 01:12:21.685027 master-0 kubenswrapper[7599]: I0313 01:12:21.684986 7599 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846"}
Mar 13 01:12:21.685027 master-0 kubenswrapper[7599]: I0313 01:12:21.685005 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff"}
Mar 13 01:12:21.685130 master-0 kubenswrapper[7599]: I0313 01:12:21.685029 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d"}
Mar 13 01:12:21.685130 master-0 kubenswrapper[7599]: I0313 01:12:21.685048 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"bbc1eef4848241d60b2e14297f83c2738656d477e9aca36b48290bd2306fa11f"}
Mar 13 01:12:21.685130 master-0 kubenswrapper[7599]: I0313 01:12:21.685071 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"4e52f6642c159916a88506443432057d57f997d443e11ff2cb2903a38a0ee186"}
Mar 13 01:12:21.685130 master-0 kubenswrapper[7599]: I0313 01:12:21.685090 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"41a562ba2a46ef687ff091bc533dc160a94bdc1572141710b80e92f2c08eb013"}
Mar 13 01:12:21.685130 master-0 kubenswrapper[7599]: I0313 01:12:21.685110 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"c6b3f392e02f5ed94d399a015a546ebd73a07ae53ff9ae5634f2dda7569b0d7e"}
Mar 13 01:12:21.706465 master-0 kubenswrapper[7599]: E0313 01:12:21.706386 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.707006 master-0 kubenswrapper[7599]: E0313 01:12:21.706975 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-scheduler-master-0\" already exists" pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 01:12:21.707779 master-0 kubenswrapper[7599]: E0313 01:12:21.707734 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 01:12:21.707849 master-0 kubenswrapper[7599]: E0313 01:12:21.707774 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"bootstrap-kube-controller-manager-master-0\" already exists" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.708205 master-0 kubenswrapper[7599]: W0313 01:12:21.708182 7599 warnings.go:70] would violate PodSecurity "restricted:latest": host namespaces (hostNetwork=true), hostPort (container "etcd" uses hostPorts 2379, 2380), privileged (containers "etcdctl", "etcd" must not set securityContext.privileged=true), allowPrivilegeEscalation != false (containers "etcdctl", "etcd" must set securityContext.allowPrivilegeEscalation=false), unrestricted capabilities (containers "etcdctl", "etcd" must set securityContext.capabilities.drop=["ALL"]), restricted volume types (volumes "certs", "data-dir" use restricted volume type "hostPath"), runAsNonRoot != true (pod or containers "etcdctl", "etcd" must set securityContext.runAsNonRoot=true), seccompProfile (pod or containers "etcdctl", "etcd" must set securityContext.seccompProfile.type to "RuntimeDefault" or "Localhost")
Mar 13 01:12:21.708358 master-0 kubenswrapper[7599]: E0313 01:12:21.708337 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0-master-0\" already exists" pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 01:12:21.749932 master-0 kubenswrapper[7599]: I0313 01:12:21.749886 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.750183 master-0 kubenswrapper[7599]: I0313 01:12:21.750157 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.750296 master-0 kubenswrapper[7599]: I0313 01:12:21.750270 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.750418 master-0 kubenswrapper[7599]: I0313 01:12:21.750401 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 01:12:21.750565 master-0 kubenswrapper[7599]: I0313 01:12:21.750495 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 01:12:21.750741 master-0 kubenswrapper[7599]: I0313 01:12:21.750716 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.750885 master-0 kubenswrapper[7599]: I0313 01:12:21.750859 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.751003 master-0 kubenswrapper[7599]: I0313 01:12:21.750985 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.751121 master-0 kubenswrapper[7599]: I0313 01:12:21.751103 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.751245 master-0 kubenswrapper[7599]: I0313 01:12:21.751224 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.751387 master-0 kubenswrapper[7599]: I0313 01:12:21.751364 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 01:12:21.751561 master-0 kubenswrapper[7599]: I0313 01:12:21.751536 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 01:12:21.751677 master-0 kubenswrapper[7599]: I0313 01:12:21.751659 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.751814 master-0 kubenswrapper[7599]: I0313 01:12:21.751783 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 01:12:21.751959 master-0 kubenswrapper[7599]: I0313 01:12:21.751935 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.752115 master-0 kubenswrapper[7599]: I0313 01:12:21.752090 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 01:12:21.752263 master-0 kubenswrapper[7599]: I0313 01:12:21.752243 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.852679 master-0 kubenswrapper[7599]: I0313 01:12:21.852593 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.854169 master-0 kubenswrapper[7599]: I0313 01:12:21.852852 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.854169 master-0 kubenswrapper[7599]: I0313 01:12:21.853758 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.854169 master-0 kubenswrapper[7599]: I0313 01:12:21.853901 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.854169 master-0 kubenswrapper[7599]: I0313 01:12:21.853910 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.854169 master-0 kubenswrapper[7599]: I0313 01:12:21.853941 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.854169 master-0 kubenswrapper[7599]: I0313 01:12:21.853986 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.854169 master-0 kubenswrapper[7599]: I0313 01:12:21.853997 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.854437 master-0 kubenswrapper[7599]: I0313 01:12:21.854159 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.854437 master-0 kubenswrapper[7599]: I0313 01:12:21.854278 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 01:12:21.854437 master-0 kubenswrapper[7599]: I0313 01:12:21.854318 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.854437 master-0 kubenswrapper[7599]: I0313 01:12:21.854333 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 01:12:21.854437 master-0 kubenswrapper[7599]: I0313 01:12:21.854368 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.854676 master-0 kubenswrapper[7599]: I0313 01:12:21.854550 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 01:12:21.854676 master-0 kubenswrapper[7599]: I0313 01:12:21.854630 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.854676 master-0 kubenswrapper[7599]: I0313 01:12:21.854595 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 01:12:21.854854 master-0 kubenswrapper[7599]: I0313 01:12:21.854691 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 01:12:21.854854 master-0 kubenswrapper[7599]: I0313 01:12:21.854806 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"bootstrap-kube-scheduler-master-0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " pod="kube-system/bootstrap-kube-scheduler-master-0"
Mar 13 01:12:21.854854 master-0 kubenswrapper[7599]: I0313 01:12:21.854833 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.855002 master-0 kubenswrapper[7599]: I0313 01:12:21.854875 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 01:12:21.855002 master-0 kubenswrapper[7599]: I0313 01:12:21.854889 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.855002 master-0 kubenswrapper[7599]: I0313 01:12:21.854909 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.855002 master-0 kubenswrapper[7599]: I0313 01:12:21.854967 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.855172 master-0 kubenswrapper[7599]: I0313 01:12:21.855009 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 01:12:21.855172 master-0 kubenswrapper[7599]: I0313 01:12:21.855048 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.855172 master-0 kubenswrapper[7599]: I0313 01:12:21.855087 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.855172 master-0 kubenswrapper[7599]: I0313 01:12:21.855099 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"bootstrap-kube-apiserver-master-0\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") " pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:12:21.855172 master-0 kubenswrapper[7599]: I0313 01:12:21.855129 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.855356 master-0 kubenswrapper[7599]: I0313 01:12:21.855193 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.855356 master-0 kubenswrapper[7599]: I0313 01:12:21.855206 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"bootstrap-kube-controller-manager-master-0\" (UID: \"f78c05e1499b533b83f091333d61f045\") " pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:12:21.855356 master-0 kubenswrapper[7599]: I0313 01:12:21.855237 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 01:12:21.855356 master-0 kubenswrapper[7599]: I0313 01:12:21.855275 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 01:12:21.855356 master-0 kubenswrapper[7599]: I0313 01:12:21.855327 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"etcd-master-0-master-0\" (UID: \"354f29997baa583b6238f7de9108ee10\") " pod="openshift-etcd/etcd-master-0-master-0"
Mar 13 01:12:21.855356 master-0 kubenswrapper[7599]: I0313 01:12:21.855328 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"
Mar 13 01:12:21.903052 master-0 kubenswrapper[7599]: I0313 01:12:21.902931 7599 apiserver.go:52] "Watching apiserver"
Mar 13 01:12:21.916287 master-0 kubenswrapper[7599]: I0313 01:12:21.916184 7599 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 13 01:12:21.917577 master-0 kubenswrapper[7599]: I0313 01:12:21.917491 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn","openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs","openshift-network-operator/network-operator-7c649bf6d4-4zrk7","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp","openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7","openshift-multus/multus-xk75p","openshift-network-operator/iptables-alerter-mkkgg","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h","assisted-installer/assisted-installer-controller-qztx6","kube-system/bootstrap-kube-scheduler-master-0","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf","openshift-etcd/etcd-master-0-master-0","openshift-marketplace/marketplace-operator-64bf9778cb-bx29h","openshift-network-diagnostics/network-check-target-49pfj","openshift-ingress-operator/ingress-operator-677db989d6-p5c8r","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8","openshift-dns-operator/dns-operator-589895fbb7-wb6qq","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8","openshift-multus/multus-additional-cni-plugins-mjh5s","openshift-multus/multus-admission-controller-8d675b596-ddtwn","openshift-multus/network-metrics-daemon-9hwz9","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg","openshift-ovn-kubernetes/ovnkube-node-nlhbx","openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g","openshift-config-operator/openshift-config-operator-64488f9d78-trr9r","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg","openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-network-node-identity/network-node-identity-mcps9","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"]
Mar 13 01:12:21.917946 master-0 kubenswrapper[7599]: I0313 01:12:21.917889 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="assisted-installer/assisted-installer-controller-qztx6"
Mar 13 01:12:21.919103 master-0 kubenswrapper[7599]: I0313 01:12:21.918956 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq"
Mar 13 01:12:21.921458 master-0 kubenswrapper[7599]: I0313 01:12:21.920055 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:12:21.921458 master-0 kubenswrapper[7599]: I0313 01:12:21.920179 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"
Mar 13 01:12:21.921458 master-0 kubenswrapper[7599]: I0313 01:12:21.920736 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:12:21.921458 master-0 kubenswrapper[7599]: I0313 01:12:21.921388 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:12:21.929021 master-0 kubenswrapper[7599]: I0313 01:12:21.928200 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt"
Mar 13 01:12:21.929021 master-0 kubenswrapper[7599]: I0313 01:12:21.928956 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"
Mar 13 01:12:21.932095 master-0 kubenswrapper[7599]: I0313 01:12:21.929228 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.932095 master-0 kubenswrapper[7599]: I0313 01:12:21.931938 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 01:12:21.932095 master-0 kubenswrapper[7599]: I0313 01:12:21.929393 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.932350 master-0 kubenswrapper[7599]: I0313 01:12:21.930017 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 13 01:12:21.932350 master-0 kubenswrapper[7599]: I0313 01:12:21.930119 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 13 01:12:21.932350 master-0 kubenswrapper[7599]: I0313 01:12:21.932217 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.932350 master-0 kubenswrapper[7599]: I0313 01:12:21.930139 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 01:12:21.932666 master-0 kubenswrapper[7599]: I0313 01:12:21.930341 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert"
Mar 13 01:12:21.932666 master-0 kubenswrapper[7599]: I0313 01:12:21.930450 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt"
Mar 13 01:12:21.932666 master-0 kubenswrapper[7599]: I0313 01:12:21.930452 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt"
Mar 13 01:12:21.932666 master-0 kubenswrapper[7599]: I0313 01:12:21.930569 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.932666 master-0 kubenswrapper[7599]: I0313 01:12:21.930616 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.932968 master-0 kubenswrapper[7599]: I0313 01:12:21.932675 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 01:12:21.932968 master-0 kubenswrapper[7599]: I0313 01:12:21.930687 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 01:12:21.932968 master-0 kubenswrapper[7599]: I0313 01:12:21.930698 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 13 01:12:21.932968 master-0 kubenswrapper[7599]: I0313 01:12:21.930789 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.932968 master-0 kubenswrapper[7599]: I0313 01:12:21.930797 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.933329 master-0 kubenswrapper[7599]: I0313 01:12:21.930880 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 13 01:12:21.933329 master-0 kubenswrapper[7599]: I0313 01:12:21.930895 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 01:12:21.933329 master-0 kubenswrapper[7599]: I0313 01:12:21.930938 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 13 01:12:21.933329 master-0 kubenswrapper[7599]: I0313 01:12:21.930959 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 01:12:21.933329 master-0 kubenswrapper[7599]: I0313 01:12:21.930971 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt"
Mar 13 01:12:21.933329 master-0 kubenswrapper[7599]: I0313 01:12:21.931052 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls"
Mar 13 01:12:21.933868 master-0 kubenswrapper[7599]: I0313 01:12:21.931130 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config"
Mar 13 01:12:21.933868 master-0 kubenswrapper[7599]: I0313 01:12:21.933387 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 13 01:12:21.934810 master-0 kubenswrapper[7599]: I0313 01:12:21.931159 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.934810 master-0 kubenswrapper[7599]: I0313 01:12:21.934132 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.934810 master-0 kubenswrapper[7599]: I0313 01:12:21.931226 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 01:12:21.936770 master-0 kubenswrapper[7599]: I0313 01:12:21.931353 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 01:12:21.936770 master-0 kubenswrapper[7599]: I0313 01:12:21.931566 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 01:12:21.936770 master-0 kubenswrapper[7599]: I0313 01:12:21.936239 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:12:21.939015 master-0 kubenswrapper[7599]: I0313 01:12:21.937328 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:12:21.939015 master-0 kubenswrapper[7599]: I0313 01:12:21.931791 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 01:12:21.939015 master-0 kubenswrapper[7599]: I0313 01:12:21.933103 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.939991 master-0 kubenswrapper[7599]: I0313 01:12:21.939426 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"
Mar 13 01:12:21.943884 master-0 kubenswrapper[7599]: I0313 01:12:21.943832 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:12:21.944386 master-0 kubenswrapper[7599]: I0313 01:12:21.944346 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 01:12:21.944617 master-0 kubenswrapper[7599]: I0313 01:12:21.944597 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Mar 13 01:12:21.944706 master-0 kubenswrapper[7599]: I0313 01:12:21.944679 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 13 01:12:21.944913 master-0 kubenswrapper[7599]: I0313 01:12:21.944897 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 01:12:21.945037 master-0 kubenswrapper[7599]: I0313 01:12:21.945013 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 01:12:21.945192 master-0 kubenswrapper[7599]: I0313 01:12:21.945105 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 01:12:21.945192 master-0 kubenswrapper[7599]: I0313 01:12:21.945144 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 13 01:12:21.945192 master-0 kubenswrapper[7599]: I0313 01:12:21.945168 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 13 01:12:21.945366 master-0 kubenswrapper[7599]: I0313 01:12:21.945337 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 13 01:12:21.945440 master-0 kubenswrapper[7599]: I0313 01:12:21.945413 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 01:12:21.945665 master-0 kubenswrapper[7599]: I0313 01:12:21.945636 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 01:12:21.945878 master-0 kubenswrapper[7599]: I0313 01:12:21.945822 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn"
Mar 13 01:12:21.946188 master-0 kubenswrapper[7599]: I0313 01:12:21.944606 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 13 01:12:21.946257 master-0 kubenswrapper[7599]: I0313 01:12:21.944905 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 13 01:12:21.946394 master-0 kubenswrapper[7599]: I0313 01:12:21.946364 7599 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:21.946580 master-0 kubenswrapper[7599]: I0313 01:12:21.946556 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 01:12:21.946860 master-0 kubenswrapper[7599]: I0313 01:12:21.946840 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 01:12:21.946950 master-0 kubenswrapper[7599]: I0313 01:12:21.946912 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 13 01:12:21.947000 master-0 kubenswrapper[7599]: I0313 01:12:21.946870 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 13 01:12:21.947108 master-0 kubenswrapper[7599]: I0313 01:12:21.946931 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Mar 13 01:12:21.947226 master-0 kubenswrapper[7599]: I0313 01:12:21.947201 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Mar 13 01:12:21.947292 master-0 kubenswrapper[7599]: I0313 01:12:21.947227 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:21.947881 master-0 kubenswrapper[7599]: I0313 01:12:21.947862 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Mar 13 01:12:21.948096 master-0 kubenswrapper[7599]: I0313 01:12:21.948080 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Mar 13 01:12:21.948945 master-0 kubenswrapper[7599]: I0313 01:12:21.948545 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 01:12:21.955352 master-0 kubenswrapper[7599]: I0313 01:12:21.952178 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 01:12:21.955352 master-0 kubenswrapper[7599]: I0313 01:12:21.953120 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 01:12:21.956009 master-0 kubenswrapper[7599]: I0313 01:12:21.955959 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Mar 13 01:12:21.956738 master-0 kubenswrapper[7599]: I0313 01:12:21.956685 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-config\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.956848 master-0 kubenswrapper[7599]: I0313 01:12:21.956786 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Mar 13 01:12:21.956898 master-0 kubenswrapper[7599]: I0313 01:12:21.956885 7599 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 01:12:21.956942 master-0 kubenswrapper[7599]: I0313 01:12:21.956897 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 01:12:21.957020 master-0 kubenswrapper[7599]: I0313 01:12:21.955955 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-config\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.957144 master-0 kubenswrapper[7599]: I0313 01:12:21.957104 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 01:12:21.957325 master-0 kubenswrapper[7599]: I0313 01:12:21.957292 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 01:12:21.957439 master-0 kubenswrapper[7599]: I0313 01:12:21.957103 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhk76\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-kube-api-access-fhk76\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:21.957573 master-0 kubenswrapper[7599]: I0313 01:12:21.957500 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fde89b0b-7133-4b97-9e35-51c0382bd366-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " 
pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:21.957623 master-0 kubenswrapper[7599]: I0313 01:12:21.957597 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfc2caf-126e-41b9-9b31-05f7a45d8536-config\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:21.957666 master-0 kubenswrapper[7599]: I0313 01:12:21.957643 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd82994-f4d4-49e9-8742-07e206322e76-serving-cert\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:21.957731 master-0 kubenswrapper[7599]: I0313 01:12:21.957702 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:21.957767 master-0 kubenswrapper[7599]: I0313 01:12:21.957750 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-system-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.957816 master-0 kubenswrapper[7599]: I0313 01:12:21.957789 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" 
(UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-cni-binary-copy\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.957859 master-0 kubenswrapper[7599]: I0313 01:12:21.957834 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-os-release\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.957893 master-0 kubenswrapper[7599]: I0313 01:12:21.957879 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz9qf\" (UniqueName: \"kubernetes.io/projected/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-kube-api-access-fz9qf\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.957945 master-0 kubenswrapper[7599]: I0313 01:12:21.957917 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-config\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:21.957993 master-0 kubenswrapper[7599]: I0313 01:12:21.957964 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztdc9\" (UniqueName: \"kubernetes.io/projected/b5757329-8692-4719-b3c7-b5df78110fcf-kube-api-access-ztdc9\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:21.958109 master-0 
kubenswrapper[7599]: I0313 01:12:21.958077 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:21.958158 master-0 kubenswrapper[7599]: I0313 01:12:21.958131 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-cnibin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.958195 master-0 kubenswrapper[7599]: I0313 01:12:21.958178 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:21.958267 master-0 kubenswrapper[7599]: I0313 01:12:21.958221 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.958302 master-0 kubenswrapper[7599]: I0313 01:12:21.958285 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-metrics-tls\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: 
\"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:12:21.958336 master-0 kubenswrapper[7599]: I0313 01:12:21.957412 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 01:12:21.958336 master-0 kubenswrapper[7599]: I0313 01:12:21.958327 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74efa52b-fd97-418a-9a44-914442633f74-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:21.958392 master-0 kubenswrapper[7599]: I0313 01:12:21.958368 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-socket-dir-parent\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.958429 master-0 kubenswrapper[7599]: I0313 01:12:21.958409 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.958475 master-0 kubenswrapper[7599]: I0313 01:12:21.958448 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dlx5\" (UniqueName: \"kubernetes.io/projected/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-kube-api-access-2dlx5\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " 
pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.958549 master-0 kubenswrapper[7599]: I0313 01:12:21.958493 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:21.958583 master-0 kubenswrapper[7599]: I0313 01:12:21.958564 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-host-etc-kube\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:12:21.958622 master-0 kubenswrapper[7599]: I0313 01:12:21.958605 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.958669 master-0 kubenswrapper[7599]: I0313 01:12:21.958643 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5gc8\" (UniqueName: \"kubernetes.io/projected/6ad2904e-ece9-4d72-8683-c3e691e07497-kube-api-access-k5gc8\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:21.958699 master-0 kubenswrapper[7599]: I0313 01:12:21.958686 7599 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-k8s-cni-cncf-io\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.958780 master-0 kubenswrapper[7599]: I0313 01:12:21.958718 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-multus\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.958819 master-0 kubenswrapper[7599]: I0313 01:12:21.958801 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:21.958858 master-0 kubenswrapper[7599]: I0313 01:12:21.958842 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.958905 master-0 kubenswrapper[7599]: I0313 01:12:21.958876 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfc2caf-126e-41b9-9b31-05f7a45d8536-config\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: 
\"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:21.959010 master-0 kubenswrapper[7599]: I0313 01:12:21.957555 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 01:12:21.959095 master-0 kubenswrapper[7599]: I0313 01:12:21.958882 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:21.959095 master-0 kubenswrapper[7599]: I0313 01:12:21.957599 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 01:12:21.959259 master-0 kubenswrapper[7599]: I0313 01:12:21.957762 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Mar 13 01:12:21.959288 master-0 kubenswrapper[7599]: I0313 01:12:21.957902 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 01:12:21.959371 master-0 kubenswrapper[7599]: I0313 01:12:21.957958 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Mar 13 01:12:21.959576 master-0 kubenswrapper[7599]: I0313 01:12:21.959540 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd82994-f4d4-49e9-8742-07e206322e76-serving-cert\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:21.960010 master-0 kubenswrapper[7599]: I0313 01:12:21.959977 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.960285 master-0 kubenswrapper[7599]: I0313 01:12:21.960255 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-metrics-tls\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:12:21.960380 master-0 kubenswrapper[7599]: I0313 01:12:21.960325 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:21.960415 master-0 kubenswrapper[7599]: I0313 01:12:21.960309 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-config\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:21.960496 master-0 kubenswrapper[7599]: I0313 01:12:21.960452 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:21.960570 master-0 kubenswrapper[7599]: I0313 01:12:21.960541 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-multus-certs\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.960664 master-0 kubenswrapper[7599]: I0313 01:12:21.960629 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6fd82994-f4d4-49e9-8742-07e206322e76-available-featuregates\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:21.960698 master-0 kubenswrapper[7599]: I0313 01:12:21.960677 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98t5h\" (UniqueName: \"kubernetes.io/projected/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-kube-api-access-98t5h\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:21.960731 master-0 kubenswrapper[7599]: I0313 01:12:21.960717 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkzq\" (UniqueName: \"kubernetes.io/projected/74efa52b-fd97-418a-9a44-914442633f74-kube-api-access-8jkzq\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:21.960782 master-0 kubenswrapper[7599]: I0313 01:12:21.960756 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/46015913-c499-49b1-a9f6-a61c6e96b13f-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:21.961069 master-0 kubenswrapper[7599]: I0313 01:12:21.960931 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-system-cni-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.961104 master-0 kubenswrapper[7599]: I0313 01:12:21.960985 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6fd82994-f4d4-49e9-8742-07e206322e76-available-featuregates\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:21.961193 master-0 kubenswrapper[7599]: I0313 01:12:21.961139 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-cni-binary-copy\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.961256 master-0 kubenswrapper[7599]: I0313 01:12:21.961222 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqfj5\" 
(UniqueName: \"kubernetes.io/projected/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-kube-api-access-pqfj5\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:21.961306 master-0 kubenswrapper[7599]: I0313 01:12:21.961266 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-kubelet\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.961353 master-0 kubenswrapper[7599]: I0313 01:12:21.961233 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.961353 master-0 kubenswrapper[7599]: I0313 01:12:21.961306 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-operand-assets\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:21.961430 master-0 kubenswrapper[7599]: I0313 01:12:21.961398 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-operand-assets\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:21.961467 
master-0 kubenswrapper[7599]: I0313 01:12:21.961429 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b67a99-eada-44d7-93eb-cc3ced777fc6-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:21.961529 master-0 kubenswrapper[7599]: I0313 01:12:21.961477 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-bin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.961721 master-0 kubenswrapper[7599]: I0313 01:12:21.961662 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cnibin\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.961793 master-0 kubenswrapper[7599]: I0313 01:12:21.961728 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5757329-8692-4719-b3c7-b5df78110fcf-serving-cert\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:21.961869 master-0 kubenswrapper[7599]: I0313 01:12:21.961805 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d368174-c659-444e-ba28-8fa267c0eda6-service-ca\") pod 
\"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:21.961869 master-0 kubenswrapper[7599]: I0313 01:12:21.961841 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.961869 master-0 kubenswrapper[7599]: I0313 01:12:21.961864 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-bound-sa-token\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:21.962031 master-0 kubenswrapper[7599]: I0313 01:12:21.961884 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:21.962031 master-0 kubenswrapper[7599]: I0313 01:12:21.961923 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-client\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.962031 master-0 kubenswrapper[7599]: I0313 01:12:21.961927 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5757329-8692-4719-b3c7-b5df78110fcf-serving-cert\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:21.962031 master-0 kubenswrapper[7599]: I0313 01:12:21.961943 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpdjh\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-kube-api-access-zpdjh\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:21.962031 master-0 kubenswrapper[7599]: I0313 01:12:21.961958 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b67a99-eada-44d7-93eb-cc3ced777fc6-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:21.962031 master-0 kubenswrapper[7599]: I0313 01:12:21.961982 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvhw\" (UniqueName: \"kubernetes.io/projected/8ad2a6d5-6edf-4840-89f9-47847c8dac05-kube-api-access-rrvhw\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:21.962284 master-0 kubenswrapper[7599]: I0313 01:12:21.962127 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bzs5\" (UniqueName: \"kubernetes.io/projected/31f19d97-50f9-4486-a8f9-df61ef2b0528-kube-api-access-4bzs5\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: 
\"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:21.962284 master-0 kubenswrapper[7599]: I0313 01:12:21.962184 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:21.962284 master-0 kubenswrapper[7599]: I0313 01:12:21.962222 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.962284 master-0 kubenswrapper[7599]: I0313 01:12:21.962260 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:21.962284 master-0 kubenswrapper[7599]: I0313 01:12:21.962266 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-client\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.962591 master-0 kubenswrapper[7599]: I0313 01:12:21.962299 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6db75e5-efd1-4bfa-9941-0934d7621ba2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:21.962591 master-0 kubenswrapper[7599]: I0313 01:12:21.962329 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d368174-c659-444e-ba28-8fa267c0eda6-service-ca\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:21.962591 master-0 kubenswrapper[7599]: I0313 01:12:21.962343 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc8xs\" (UniqueName: \"kubernetes.io/projected/46015913-c499-49b1-a9f6-a61c6e96b13f-kube-api-access-jc8xs\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:21.962591 master-0 kubenswrapper[7599]: I0313 01:12:21.962401 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6db75e5-efd1-4bfa-9941-0934d7621ba2-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:21.962591 master-0 kubenswrapper[7599]: I0313 01:12:21.962426 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: 
\"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.962591 master-0 kubenswrapper[7599]: I0313 01:12:21.962447 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rg4g\" (UniqueName: \"kubernetes.io/projected/96b67a99-eada-44d7-93eb-cc3ced777fc6-kube-api-access-4rg4g\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:21.962591 master-0 kubenswrapper[7599]: I0313 01:12:21.962469 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.962591 master-0 kubenswrapper[7599]: I0313 01:12:21.962474 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:21.962591 master-0 kubenswrapper[7599]: I0313 01:12:21.962553 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74efa52b-fd97-418a-9a44-914442633f74-config\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:21.962591 master-0 
kubenswrapper[7599]: I0313 01:12:21.962584 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-os-release\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.962996 master-0 kubenswrapper[7599]: I0313 01:12:21.962617 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 01:12:21.962996 master-0 kubenswrapper[7599]: I0313 01:12:21.962853 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6db75e5-efd1-4bfa-9941-0934d7621ba2-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:21.962996 master-0 kubenswrapper[7599]: I0313 01:12:21.962867 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Mar 13 01:12:21.962996 master-0 kubenswrapper[7599]: I0313 01:12:21.962938 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 01:12:21.963141 master-0 kubenswrapper[7599]: I0313 01:12:21.963024 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74efa52b-fd97-418a-9a44-914442633f74-config\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:21.963141 master-0 kubenswrapper[7599]: I0313 01:12:21.963090 
7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-netns\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.963141 master-0 kubenswrapper[7599]: I0313 01:12:21.963107 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 01:12:21.963141 master-0 kubenswrapper[7599]: I0313 01:12:21.963134 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjkgv\" (UniqueName: \"kubernetes.io/projected/de46c12a-aa3e-442e-bcc4-365d05f50103-kube-api-access-sjkgv\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.963634 master-0 kubenswrapper[7599]: I0313 01:12:21.963173 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfc2caf-126e-41b9-9b31-05f7a45d8536-serving-cert\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:21.963634 master-0 kubenswrapper[7599]: I0313 01:12:21.963211 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:21.963634 master-0 kubenswrapper[7599]: I0313 01:12:21.963249 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-daemon-config\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.963634 master-0 kubenswrapper[7599]: I0313 01:12:21.963286 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:21.963634 master-0 kubenswrapper[7599]: I0313 01:12:21.963323 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:21.963634 master-0 kubenswrapper[7599]: I0313 01:12:21.963526 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfc2caf-126e-41b9-9b31-05f7a45d8536-serving-cert\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:21.963888 master-0 kubenswrapper[7599]: I0313 01:12:21.963831 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 01:12:21.963934 master-0 kubenswrapper[7599]: I0313 01:12:21.963881 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-daemon-config\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.963934 master-0 kubenswrapper[7599]: I0313 01:12:21.963913 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:21.964021 master-0 kubenswrapper[7599]: I0313 01:12:21.963942 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:21.964021 master-0 kubenswrapper[7599]: I0313 01:12:21.963978 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75a53c09-210a-4346-99b0-a632b9e0a3c9-trusted-ca\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:21.964021 master-0 kubenswrapper[7599]: I0313 01:12:21.964001 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde89b0b-7133-4b97-9e35-51c0382bd366-config\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:21.964129 
master-0 kubenswrapper[7599]: I0313 01:12:21.964029 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jgj\" (UniqueName: \"kubernetes.io/projected/d163333f-fda5-4067-ad7c-6f646ae411c8-kube-api-access-v2jgj\") pod \"csi-snapshot-controller-operator-5685fbc7d-478l8\" (UID: \"d163333f-fda5-4067-ad7c-6f646ae411c8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8" Mar 13 01:12:21.964129 master-0 kubenswrapper[7599]: I0313 01:12:21.964057 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-etc-kubernetes\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.964129 master-0 kubenswrapper[7599]: I0313 01:12:21.964087 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nbvg\" (UniqueName: \"kubernetes.io/projected/fbfc2caf-126e-41b9-9b31-05f7a45d8536-kube-api-access-2nbvg\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:21.964129 master-0 kubenswrapper[7599]: I0313 01:12:21.964109 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:21.964279 master-0 kubenswrapper[7599]: I0313 01:12:21.964135 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:21.964279 master-0 kubenswrapper[7599]: I0313 01:12:21.964153 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Mar 13 01:12:21.964279 master-0 kubenswrapper[7599]: I0313 01:12:21.964163 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:21.964392 master-0 kubenswrapper[7599]: I0313 01:12:21.964292 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 13 01:12:21.964392 master-0 kubenswrapper[7599]: I0313 01:12:21.964323 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde89b0b-7133-4b97-9e35-51c0382bd366-serving-cert\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:21.964392 master-0 kubenswrapper[7599]: I0313 01:12:21.964363 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " 
pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:21.964392 master-0 kubenswrapper[7599]: I0313 01:12:21.964368 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smhrl\" (UniqueName: \"kubernetes.io/projected/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-kube-api-access-smhrl\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:21.964576 master-0 kubenswrapper[7599]: I0313 01:12:21.964421 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6db75e5-efd1-4bfa-9941-0934d7621ba2-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:21.964576 master-0 kubenswrapper[7599]: I0313 01:12:21.964463 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-conf-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.964576 master-0 kubenswrapper[7599]: I0313 01:12:21.964329 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:21.965248 master-0 kubenswrapper[7599]: I0313 01:12:21.965189 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-config\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:21.965296 master-0 kubenswrapper[7599]: I0313 01:12:21.965271 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b67a99-eada-44d7-93eb-cc3ced777fc6-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:21.965343 master-0 kubenswrapper[7599]: I0313 01:12:21.965320 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:21.965384 master-0 kubenswrapper[7599]: I0313 01:12:21.965363 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xmqc\" (UniqueName: \"kubernetes.io/projected/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-kube-api-access-5xmqc\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:12:21.965429 master-0 kubenswrapper[7599]: I0313 01:12:21.965403 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91fc568a-61ad-400e-a54e-21d62e51bb17-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: 
\"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:21.965470 master-0 kubenswrapper[7599]: I0313 01:12:21.965436 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-serving-cert\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.965536 master-0 kubenswrapper[7599]: I0313 01:12:21.965472 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:21.965536 master-0 kubenswrapper[7599]: I0313 01:12:21.965532 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:21.965623 master-0 kubenswrapper[7599]: I0313 01:12:21.965579 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b8jr\" (UniqueName: \"kubernetes.io/projected/7d874a21-43aa-4d81-b904-853fb3da5a94-kube-api-access-4b8jr\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:21.965623 master-0 kubenswrapper[7599]: I0313 01:12:21.965591 7599 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde89b0b-7133-4b97-9e35-51c0382bd366-config\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:21.965623 master-0 kubenswrapper[7599]: I0313 01:12:21.965611 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d368174-c659-444e-ba28-8fa267c0eda6-kube-api-access\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:21.965812 master-0 kubenswrapper[7599]: I0313 01:12:21.965775 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde89b0b-7133-4b97-9e35-51c0382bd366-serving-cert\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:21.965950 master-0 kubenswrapper[7599]: I0313 01:12:21.965920 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6db75e5-efd1-4bfa-9941-0934d7621ba2-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:21.966669 master-0 kubenswrapper[7599]: I0313 01:12:21.966626 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 13 01:12:21.966962 master-0 kubenswrapper[7599]: I0313 01:12:21.966899 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4qsk\" 
(UniqueName: \"kubernetes.io/projected/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-kube-api-access-b4qsk\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:21.967019 master-0 kubenswrapper[7599]: I0313 01:12:21.966953 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b67a99-eada-44d7-93eb-cc3ced777fc6-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:21.968030 master-0 kubenswrapper[7599]: I0313 01:12:21.967769 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-serving-cert\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:21.969605 master-0 kubenswrapper[7599]: I0313 01:12:21.968904 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-config\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:21.969605 master-0 kubenswrapper[7599]: I0313 01:12:21.969020 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca" Mar 13 01:12:21.969605 master-0 kubenswrapper[7599]: I0313 01:12:21.969427 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 
01:12:21.969605 master-0 kubenswrapper[7599]: I0313 01:12:21.969621 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8l9r\" (UniqueName: \"kubernetes.io/projected/6fd82994-f4d4-49e9-8742-07e206322e76-kube-api-access-k8l9r\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:21.970095 master-0 kubenswrapper[7599]: I0313 01:12:21.969674 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:21.970095 master-0 kubenswrapper[7599]: I0313 01:12:21.969785 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt" Mar 13 01:12:21.970095 master-0 kubenswrapper[7599]: I0313 01:12:21.969922 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:21.970095 master-0 kubenswrapper[7599]: I0313 01:12:21.969965 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74efa52b-fd97-418a-9a44-914442633f74-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 
01:12:21.970095 master-0 kubenswrapper[7599]: I0313 01:12:21.970094 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:21.970395 master-0 kubenswrapper[7599]: I0313 01:12:21.970135 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-hostroot\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:21.970395 master-0 kubenswrapper[7599]: I0313 01:12:21.970241 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.973599 master-0 kubenswrapper[7599]: I0313 01:12:21.971368 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Mar 13 01:12:21.973599 master-0 kubenswrapper[7599]: I0313 01:12:21.971394 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 01:12:21.973599 master-0 kubenswrapper[7599]: I0313 01:12:21.973211 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " 
pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:21.974250 master-0 kubenswrapper[7599]: I0313 01:12:21.974207 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:21.974683 master-0 kubenswrapper[7599]: I0313 01:12:21.974630 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 13 01:12:21.975047 master-0 kubenswrapper[7599]: I0313 01:12:21.975004 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 01:12:21.975162 master-0 kubenswrapper[7599]: I0313 01:12:21.974709 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 01:12:21.975453 master-0 kubenswrapper[7599]: I0313 01:12:21.975423 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Mar 13 01:12:21.979194 master-0 kubenswrapper[7599]: I0313 01:12:21.978076 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91fc568a-61ad-400e-a54e-21d62e51bb17-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:21.979194 master-0 kubenswrapper[7599]: I0313 01:12:21.978646 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-trusted-ca-bundle\") pod 
\"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:21.979743 master-0 kubenswrapper[7599]: I0313 01:12:21.979712 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Mar 13 01:12:21.980131 master-0 kubenswrapper[7599]: I0313 01:12:21.980109 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 13 01:12:21.980539 master-0 kubenswrapper[7599]: I0313 01:12:21.980524 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 01:12:21.981259 master-0 kubenswrapper[7599]: I0313 01:12:21.981219 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:21.981346 master-0 kubenswrapper[7599]: I0313 01:12:21.981300 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/46015913-c499-49b1-a9f6-a61c6e96b13f-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:21.982129 master-0 kubenswrapper[7599]: I0313 01:12:21.982099 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Mar 13 01:12:21.982405 master-0 kubenswrapper[7599]: I0313 01:12:21.982365 7599 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"trusted-ca" Mar 13 01:12:21.982602 master-0 kubenswrapper[7599]: I0313 01:12:21.982572 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 01:12:21.982796 master-0 kubenswrapper[7599]: I0313 01:12:21.982728 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 01:12:21.982839 master-0 kubenswrapper[7599]: I0313 01:12:21.982799 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 01:12:21.982975 master-0 kubenswrapper[7599]: I0313 01:12:21.982954 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 01:12:21.983163 master-0 kubenswrapper[7599]: I0313 01:12:21.983082 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 01:12:21.983163 master-0 kubenswrapper[7599]: I0313 01:12:21.983095 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Mar 13 01:12:21.983740 master-0 kubenswrapper[7599]: I0313 01:12:21.983299 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Mar 13 01:12:21.983740 master-0 kubenswrapper[7599]: I0313 01:12:21.983387 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 01:12:21.983740 master-0 kubenswrapper[7599]: I0313 01:12:21.983649 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 01:12:21.983880 master-0 kubenswrapper[7599]: I0313 01:12:21.983743 7599 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-diagnostics"/"kube-root-ca.crt" Mar 13 01:12:21.985133 master-0 kubenswrapper[7599]: I0313 01:12:21.985088 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 01:12:21.987410 master-0 kubenswrapper[7599]: I0313 01:12:21.987362 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75a53c09-210a-4346-99b0-a632b9e0a3c9-trusted-ca\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:22.009139 master-0 kubenswrapper[7599]: I0313 01:12:22.009086 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 01:12:22.033180 master-0 kubenswrapper[7599]: I0313 01:12:22.033108 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 01:12:22.037600 master-0 kubenswrapper[7599]: I0313 01:12:22.037563 7599 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Mar 13 01:12:22.072313 master-0 kubenswrapper[7599]: I0313 01:12:22.072229 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-netns\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.072578 master-0 kubenswrapper[7599]: I0313 01:12:22.072340 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-systemd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.072578 master-0 kubenswrapper[7599]: I0313 01:12:22.072418 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:22.072578 master-0 kubenswrapper[7599]: I0313 01:12:22.072469 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-os-release\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.072578 master-0 kubenswrapper[7599]: I0313 01:12:22.072503 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5lg5\" (UniqueName: \"kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:22.072722 master-0 kubenswrapper[7599]: I0313 01:12:22.072605 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-kubelet\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.072722 master-0 kubenswrapper[7599]: I0313 01:12:22.072639 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-env-overrides\") pod 
\"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.072722 master-0 kubenswrapper[7599]: I0313 01:12:22.072672 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:22.072801 master-0 kubenswrapper[7599]: I0313 01:12:22.072726 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-etc-kubernetes\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.072801 master-0 kubenswrapper[7599]: I0313 01:12:22.072765 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj7cp\" (UniqueName: \"kubernetes.io/projected/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-kube-api-access-pj7cp\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:22.072852 master-0 kubenswrapper[7599]: I0313 01:12:22.072808 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:22.072884 master-0 kubenswrapper[7599]: I0313 01:12:22.072862 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-etc-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.072912 master-0 kubenswrapper[7599]: I0313 01:12:22.072898 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-systemd-units\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.072946 master-0 kubenswrapper[7599]: I0313 01:12:22.072931 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.072978 master-0 kubenswrapper[7599]: I0313 01:12:22.072962 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-bin\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.073040 master-0 kubenswrapper[7599]: I0313 01:12:22.073008 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-conf-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.073074 master-0 kubenswrapper[7599]: I0313 01:12:22.073056 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-t6wzz\" (UniqueName: \"kubernetes.io/projected/8c377a67-e763-4925-afae-a7f8546a369b-kube-api-access-t6wzz\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:12:22.073104 master-0 kubenswrapper[7599]: I0313 01:12:22.073090 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/69da0e58-2ae6-4d4b-b125-77e93df3d660-iptables-alerter-script\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:22.073149 master-0 kubenswrapper[7599]: I0313 01:12:22.073126 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:22.073197 master-0 kubenswrapper[7599]: I0313 01:12:22.073175 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49a28ab7-1176-4213-b037-19fe18bbe57b-ovn-node-metrics-cert\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.073231 master-0 kubenswrapper[7599]: I0313 01:12:22.073214 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c687237e-50e5-405d-8fef-0efbc3866630-webhook-cert\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 
01:12:22.073311 master-0 kubenswrapper[7599]: I0313 01:12:22.073278 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:12:22.073352 master-0 kubenswrapper[7599]: I0313 01:12:22.073333 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:22.073419 master-0 kubenswrapper[7599]: I0313 01:12:22.073370 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-netns\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.073453 master-0 kubenswrapper[7599]: I0313 01:12:22.073433 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-var-lib-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.073484 master-0 kubenswrapper[7599]: I0313 01:12:22.073465 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod 
\"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:22.073553 master-0 kubenswrapper[7599]: I0313 01:12:22.073505 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:12:22.073727 master-0 kubenswrapper[7599]: I0313 01:12:22.073625 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:22.073727 master-0 kubenswrapper[7599]: I0313 01:12:22.073718 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-hostroot\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.073815 master-0 kubenswrapper[7599]: I0313 01:12:22.073755 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.073815 master-0 kubenswrapper[7599]: I0313 01:12:22.073807 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-env-overrides\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:12:22.074426 master-0 kubenswrapper[7599]: I0313 01:12:22.073844 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzxv5\" (UniqueName: \"kubernetes.io/projected/69da0e58-2ae6-4d4b-b125-77e93df3d660-kube-api-access-pzxv5\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:22.074426 master-0 kubenswrapper[7599]: I0313 01:12:22.073939 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-netns\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.074426 master-0 kubenswrapper[7599]: I0313 01:12:22.073984 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-conf-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.074426 master-0 kubenswrapper[7599]: I0313 01:12:22.074034 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:22.074426 master-0 kubenswrapper[7599]: I0313 01:12:22.074305 7599 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-os-release\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.074426 master-0 kubenswrapper[7599]: E0313 01:12:22.074336 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 01:12:22.074426 master-0 kubenswrapper[7599]: E0313 01:12:22.074399 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 01:12:22.074426 master-0 kubenswrapper[7599]: I0313 01:12:22.074437 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-ssl-certs\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:22.074426 master-0 kubenswrapper[7599]: I0313 01:12:22.074449 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-hostroot\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.074964 master-0 kubenswrapper[7599]: E0313 01:12:22.074503 7599 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:22.074964 master-0 kubenswrapper[7599]: I0313 01:12:22.074574 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-system-cni-dir\") pod \"multus-xk75p\" (UID: 
\"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.074964 master-0 kubenswrapper[7599]: I0313 01:12:22.074605 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/69da0e58-2ae6-4d4b-b125-77e93df3d660-iptables-alerter-script\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:22.074964 master-0 kubenswrapper[7599]: E0313 01:12:22.074717 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert podName:31f19d97-50f9-4486-a8f9-df61ef2b0528 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.574691474 +0000 UTC m=+1.846370878 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert") pod "olm-operator-d64cfc9db-r4gzg" (UID: "31f19d97-50f9-4486-a8f9-df61ef2b0528") : secret "olm-operator-serving-cert" not found Mar 13 01:12:22.074964 master-0 kubenswrapper[7599]: I0313 01:12:22.074802 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:12:22.074964 master-0 kubenswrapper[7599]: E0313 01:12:22.074870 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:22.074964 master-0 kubenswrapper[7599]: I0313 01:12:22.074914 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" 
(UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-env-overrides\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.074964 master-0 kubenswrapper[7599]: I0313 01:12:22.074968 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c687237e-50e5-405d-8fef-0efbc3866630-webhook-cert\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: E0313 01:12:22.074980 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.574946069 +0000 UTC m=+1.846625673 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: E0313 01:12:22.075018 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls podName:7d874a21-43aa-4d81-b904-853fb3da5a94 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.57500864 +0000 UTC m=+1.846688044 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls") pod "dns-operator-589895fbb7-wb6qq" (UID: "7d874a21-43aa-4d81-b904-853fb3da5a94") : secret "metrics-tls" not found Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075135 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-cnibin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: E0313 01:12:22.075185 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075199 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-system-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075245 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-cnibin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075249 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:12:22.075802 
master-0 kubenswrapper[7599]: I0313 01:12:22.075269 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-etc-kubernetes\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075195 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-os-release\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: E0313 01:12:22.075261 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics podName:8ad2a6d5-6edf-4840-89f9-47847c8dac05 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.575235686 +0000 UTC m=+1.846915120 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-bx29h" (UID: "8ad2a6d5-6edf-4840-89f9-47847c8dac05") : secret "marketplace-operator-metrics" not found Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075258 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-env-overrides\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075259 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-os-release\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075428 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49a28ab7-1176-4213-b037-19fe18bbe57b-ovn-node-metrics-cert\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075557 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 
01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075633 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-cvo-updatepayloads\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: E0313 01:12:22.075713 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert podName:6ad2904e-ece9-4d72-8683-c3e691e07497 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.575684635 +0000 UTC m=+1.847364069 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert") pod "catalog-operator-7d9c49f57b-4jttq" (UID: "6ad2904e-ece9-4d72-8683-c3e691e07497") : secret "catalog-operator-serving-cert" not found Mar 13 01:12:22.075802 master-0 kubenswrapper[7599]: I0313 01:12:22.075761 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.076453 master-0 kubenswrapper[7599]: I0313 01:12:22.075861 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-socket-dir-parent\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.076453 master-0 
kubenswrapper[7599]: I0313 01:12:22.075958 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-netd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.076453 master-0 kubenswrapper[7599]: I0313 01:12:22.076033 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:22.076453 master-0 kubenswrapper[7599]: I0313 01:12:22.076123 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-host-etc-kube\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:12:22.076453 master-0 kubenswrapper[7599]: I0313 01:12:22.076160 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-ovnkube-identity-cm\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:12:22.076453 master-0 kubenswrapper[7599]: I0313 01:12:22.076193 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-tuning-conf-dir\") 
pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:22.076453 master-0 kubenswrapper[7599]: I0313 01:12:22.076237 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-k8s-cni-cncf-io\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.076453 master-0 kubenswrapper[7599]: I0313 01:12:22.076268 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-multus\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.076453 master-0 kubenswrapper[7599]: I0313 01:12:22.076382 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-slash\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.076859 master-0 kubenswrapper[7599]: I0313 01:12:22.076583 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69da0e58-2ae6-4d4b-b125-77e93df3d660-host-slash\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:22.076859 master-0 kubenswrapper[7599]: I0313 01:12:22.076634 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:22.076859 master-0 kubenswrapper[7599]: I0313 01:12:22.076692 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:22.076859 master-0 kubenswrapper[7599]: I0313 01:12:22.076742 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:22.076859 master-0 kubenswrapper[7599]: I0313 01:12:22.076777 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-multus-certs\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.076859 master-0 kubenswrapper[7599]: I0313 01:12:22.076833 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-system-cni-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " 
pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077014 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-kubelet\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077054 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-log-socket\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077089 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-bin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077127 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-node-log\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077158 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-ovn\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077190 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cnibin\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077224 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077268 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077302 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-config\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077336 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txxbg\" (UniqueName: \"kubernetes.io/projected/c687237e-50e5-405d-8fef-0efbc3866630-kube-api-access-txxbg\") pod \"network-node-identity-mcps9\" (UID: 
\"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077371 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n58nf\" (UniqueName: \"kubernetes.io/projected/49a28ab7-1176-4213-b037-19fe18bbe57b-kube-api-access-n58nf\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077407 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c377a67-e763-4925-afae-a7f8546a369b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077448 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-system-cni-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:22.077570 master-0 kubenswrapper[7599]: I0313 01:12:22.077480 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-multus\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.077611 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret 
"package-server-manager-serving-cert" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.077665 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert podName:53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.577647866 +0000 UTC m=+1.849327290 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-pj26h" (UID: "53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59") : secret "package-server-manager-serving-cert" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.077726 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-socket-dir-parent\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.078129 7599 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.078189 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.578164797 +0000 UTC m=+1.849844191 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078234 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-bin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078275 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cnibin\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078327 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078346 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-kubelet\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078348 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" 
(UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-ovnkube-identity-cm\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078371 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-host-etc-kube\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078407 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-config\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078411 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078555 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-k8s-cni-cncf-io\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.078570 7599 secret.go:189] Couldn't get secret 
openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.078627 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.578613166 +0000 UTC m=+1.850292570 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "node-tuning-operator-tls" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078641 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-multus-certs\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.078695 7599 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.078757 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls podName:91fc568a-61ad-400e-a54e-21d62e51bb17 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.578740439 +0000 UTC m=+1.850419863 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-6vvzl" (UID: "91fc568a-61ad-400e-a54e-21d62e51bb17") : secret "image-registry-operator-tls" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078763 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c377a67-e763-4925-afae-a7f8546a369b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.078757 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.078942 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls podName:46015913-c499-49b1-a9f6-a61c6e96b13f nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.578930273 +0000 UTC m=+1.850609887 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-75jj7" (UID: "46015913-c499-49b1-a9f6-a61c6e96b13f") : secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.078972 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-script-lib\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.079090 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.079175 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-script-lib\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.079186 7599 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: E0313 01:12:22.079223 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls podName:75a53c09-210a-4346-99b0-a632b9e0a3c9 
nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.579211109 +0000 UTC m=+1.850890523 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls") pod "ingress-operator-677db989d6-p5c8r" (UID: "75a53c09-210a-4346-99b0-a632b9e0a3c9") : secret "metrics-tls" not found Mar 13 01:12:22.079549 master-0 kubenswrapper[7599]: I0313 01:12:22.079253 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:22.087533 master-0 kubenswrapper[7599]: I0313 01:12:22.087485 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhk76\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-kube-api-access-fhk76\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:22.104237 master-0 kubenswrapper[7599]: I0313 01:12:22.104173 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fde89b0b-7133-4b97-9e35-51c0382bd366-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:12:22.117788 master-0 kubenswrapper[7599]: I0313 01:12:22.117715 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz9qf\" (UniqueName: \"kubernetes.io/projected/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-kube-api-access-fz9qf\") pod 
\"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:12:22.143000 master-0 kubenswrapper[7599]: I0313 01:12:22.142913 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dlx5\" (UniqueName: \"kubernetes.io/projected/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-kube-api-access-2dlx5\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:12:22.155744 master-0 kubenswrapper[7599]: I0313 01:12:22.155673 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztdc9\" (UniqueName: \"kubernetes.io/projected/b5757329-8692-4719-b3c7-b5df78110fcf-kube-api-access-ztdc9\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:12:22.180786 master-0 kubenswrapper[7599]: I0313 01:12:22.180725 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-log-socket\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.180964 master-0 kubenswrapper[7599]: I0313 01:12:22.180894 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-log-socket\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.181062 master-0 kubenswrapper[7599]: I0313 01:12:22.180979 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-node-log\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.181109 master-0 kubenswrapper[7599]: I0313 01:12:22.181083 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-ovn\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.181183 master-0 kubenswrapper[7599]: I0313 01:12:22.181162 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-ovn\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.181306 master-0 kubenswrapper[7599]: I0313 01:12:22.181226 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-node-log\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.181550 master-0 kubenswrapper[7599]: I0313 01:12:22.181491 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:22.181620 master-0 kubenswrapper[7599]: I0313 01:12:22.181596 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-systemd\") 
pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.181733 master-0 kubenswrapper[7599]: I0313 01:12:22.181690 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-kubelet\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.181804 master-0 kubenswrapper[7599]: I0313 01:12:22.181778 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-etc-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.181849 master-0 kubenswrapper[7599]: I0313 01:12:22.181828 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-systemd-units\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.181886 master-0 kubenswrapper[7599]: I0313 01:12:22.181856 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.181916 master-0 kubenswrapper[7599]: I0313 01:12:22.181905 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-bin\") pod 
\"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.182073 master-0 kubenswrapper[7599]: I0313 01:12:22.182003 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-netns\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.182073 master-0 kubenswrapper[7599]: I0313 01:12:22.182029 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-var-lib-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.182133 master-0 kubenswrapper[7599]: I0313 01:12:22.182093 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:22.182164 master-0 kubenswrapper[7599]: I0313 01:12:22.182131 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:22.182194 master-0 kubenswrapper[7599]: I0313 01:12:22.182161 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.182262 master-0 kubenswrapper[7599]: I0313 01:12:22.182231 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.182311 master-0 kubenswrapper[7599]: I0313 01:12:22.182287 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-netd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.182408 master-0 kubenswrapper[7599]: I0313 01:12:22.182383 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-slash\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.182439 master-0 kubenswrapper[7599]: I0313 01:12:22.182414 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69da0e58-2ae6-4d4b-b125-77e93df3d660-host-slash\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:22.182573 master-0 kubenswrapper[7599]: I0313 01:12:22.182548 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"host-slash\" (UniqueName: \"kubernetes.io/host-path/69da0e58-2ae6-4d4b-b125-77e93df3d660-host-slash\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:22.182698 master-0 kubenswrapper[7599]: E0313 01:12:22.182673 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 01:12:22.182761 master-0 kubenswrapper[7599]: E0313 01:12:22.182741 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.682721945 +0000 UTC m=+1.954401339 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : secret "metrics-daemon-secret" not found Mar 13 01:12:22.183337 master-0 kubenswrapper[7599]: I0313 01:12:22.183287 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-var-lib-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.183401 master-0 kubenswrapper[7599]: I0313 01:12:22.183361 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-systemd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.183443 master-0 kubenswrapper[7599]: I0313 01:12:22.183414 7599 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-kubelet\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.183495 master-0 kubenswrapper[7599]: I0313 01:12:22.183467 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-etc-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.183575 master-0 kubenswrapper[7599]: I0313 01:12:22.183542 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-systemd-units\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.183628 master-0 kubenswrapper[7599]: I0313 01:12:22.183607 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.183706 master-0 kubenswrapper[7599]: I0313 01:12:22.183666 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-bin\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.183751 master-0 kubenswrapper[7599]: I0313 01:12:22.183727 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" 
(UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-netns\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.183823 master-0 kubenswrapper[7599]: I0313 01:12:22.183793 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.184013 master-0 kubenswrapper[7599]: E0313 01:12:22.183974 7599 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 01:12:22.184083 master-0 kubenswrapper[7599]: E0313 01:12:22.184063 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs podName:161d2fa6-a541-427a-a3e9-3297102a26f5 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:22.684034293 +0000 UTC m=+1.955713727 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs") pod "multus-admission-controller-8d675b596-ddtwn" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5") : secret "multus-admission-controller-secret" not found Mar 13 01:12:22.184310 master-0 kubenswrapper[7599]: I0313 01:12:22.184267 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.184371 master-0 kubenswrapper[7599]: I0313 01:12:22.184344 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-slash\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.184437 master-0 kubenswrapper[7599]: I0313 01:12:22.184409 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-netd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.187596 master-0 kubenswrapper[7599]: I0313 01:12:22.187489 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5gc8\" (UniqueName: \"kubernetes.io/projected/6ad2904e-ece9-4d72-8683-c3e691e07497-kube-api-access-k5gc8\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:22.217119 master-0 kubenswrapper[7599]: I0313 01:12:22.217054 7599 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkzq\" (UniqueName: \"kubernetes.io/projected/74efa52b-fd97-418a-9a44-914442633f74-kube-api-access-8jkzq\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:12:22.218200 master-0 kubenswrapper[7599]: I0313 01:12:22.218136 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98t5h\" (UniqueName: \"kubernetes.io/projected/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-kube-api-access-98t5h\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:22.262099 master-0 kubenswrapper[7599]: I0313 01:12:22.262034 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqfj5\" (UniqueName: \"kubernetes.io/projected/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-kube-api-access-pqfj5\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:12:22.276931 master-0 kubenswrapper[7599]: I0313 01:12:22.276884 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpdjh\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-kube-api-access-zpdjh\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:22.286556 master-0 kubenswrapper[7599]: I0313 01:12:22.286256 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvhw\" (UniqueName: 
\"kubernetes.io/projected/8ad2a6d5-6edf-4840-89f9-47847c8dac05-kube-api-access-rrvhw\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:22.319570 master-0 kubenswrapper[7599]: I0313 01:12:22.319497 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-bound-sa-token\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:22.333366 master-0 kubenswrapper[7599]: I0313 01:12:22.333269 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bzs5\" (UniqueName: \"kubernetes.io/projected/31f19d97-50f9-4486-a8f9-df61ef2b0528-kube-api-access-4bzs5\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:22.341723 master-0 kubenswrapper[7599]: I0313 01:12:22.341583 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6db75e5-efd1-4bfa-9941-0934d7621ba2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:12:22.376227 master-0 kubenswrapper[7599]: I0313 01:12:22.374428 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc8xs\" (UniqueName: \"kubernetes.io/projected/46015913-c499-49b1-a9f6-a61c6e96b13f-kube-api-access-jc8xs\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " 
pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:22.385069 master-0 kubenswrapper[7599]: I0313 01:12:22.383732 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rg4g\" (UniqueName: \"kubernetes.io/projected/96b67a99-eada-44d7-93eb-cc3ced777fc6-kube-api-access-4rg4g\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:12:22.400971 master-0 kubenswrapper[7599]: I0313 01:12:22.400880 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:12:22.427969 master-0 kubenswrapper[7599]: I0313 01:12:22.427846 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjkgv\" (UniqueName: \"kubernetes.io/projected/de46c12a-aa3e-442e-bcc4-365d05f50103-kube-api-access-sjkgv\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:12:22.459050 master-0 kubenswrapper[7599]: I0313 01:12:22.459005 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smhrl\" (UniqueName: \"kubernetes.io/projected/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-kube-api-access-smhrl\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:12:22.462968 master-0 kubenswrapper[7599]: I0313 01:12:22.462945 7599 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d368174-c659-444e-ba28-8fa267c0eda6-kube-api-access\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:22.480450 master-0 kubenswrapper[7599]: I0313 01:12:22.480425 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nbvg\" (UniqueName: \"kubernetes.io/projected/fbfc2caf-126e-41b9-9b31-05f7a45d8536-kube-api-access-2nbvg\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:12:22.511938 master-0 kubenswrapper[7599]: I0313 01:12:22.511874 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xmqc\" (UniqueName: \"kubernetes.io/projected/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-kube-api-access-5xmqc\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:12:22.521471 master-0 kubenswrapper[7599]: I0313 01:12:22.521405 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:22.551196 master-0 kubenswrapper[7599]: I0313 01:12:22.550659 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jgj\" (UniqueName: \"kubernetes.io/projected/d163333f-fda5-4067-ad7c-6f646ae411c8-kube-api-access-v2jgj\") pod \"csi-snapshot-controller-operator-5685fbc7d-478l8\" (UID: 
\"d163333f-fda5-4067-ad7c-6f646ae411c8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8" Mar 13 01:12:22.563142 master-0 kubenswrapper[7599]: I0313 01:12:22.563093 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b8jr\" (UniqueName: \"kubernetes.io/projected/7d874a21-43aa-4d81-b904-853fb3da5a94-kube-api-access-4b8jr\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:22.582742 master-0 kubenswrapper[7599]: I0313 01:12:22.582692 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4qsk\" (UniqueName: \"kubernetes.io/projected/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-kube-api-access-b4qsk\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:22.597051 master-0 kubenswrapper[7599]: I0313 01:12:22.596833 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:22.597254 master-0 kubenswrapper[7599]: I0313 01:12:22.597222 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 
01:12:22.597486 master-0 kubenswrapper[7599]: I0313 01:12:22.597443 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:22.597723 master-0 kubenswrapper[7599]: I0313 01:12:22.597694 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:22.597850 master-0 kubenswrapper[7599]: E0313 01:12:22.597220 7599 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 01:12:22.598024 master-0 kubenswrapper[7599]: E0313 01:12:22.598002 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls podName:91fc568a-61ad-400e-a54e-21d62e51bb17 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.597973596 +0000 UTC m=+2.869653050 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-6vvzl" (UID: "91fc568a-61ad-400e-a54e-21d62e51bb17") : secret "image-registry-operator-tls" not found Mar 13 01:12:22.598181 master-0 kubenswrapper[7599]: E0313 01:12:22.597307 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 01:12:22.598339 master-0 kubenswrapper[7599]: I0313 01:12:22.598311 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:22.598462 master-0 kubenswrapper[7599]: E0313 01:12:22.598329 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.598284953 +0000 UTC m=+2.869964427 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "node-tuning-operator-tls" not found Mar 13 01:12:22.598661 master-0 kubenswrapper[7599]: E0313 01:12:22.597651 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:22.598745 master-0 kubenswrapper[7599]: E0313 01:12:22.597905 7599 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 01:12:22.598745 master-0 kubenswrapper[7599]: E0313 01:12:22.598456 7599 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:22.598977 master-0 kubenswrapper[7599]: I0313 01:12:22.598948 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:22.599187 master-0 kubenswrapper[7599]: E0313 01:12:22.599140 7599 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:22.599187 master-0 kubenswrapper[7599]: E0313 01:12:22.599173 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls podName:46015913-c499-49b1-a9f6-a61c6e96b13f nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.59910965 +0000 UTC m=+2.870789224 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-75jj7" (UID: "46015913-c499-49b1-a9f6-a61c6e96b13f") : secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:22.599349 master-0 kubenswrapper[7599]: E0313 01:12:22.599221 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls podName:7d874a21-43aa-4d81-b904-853fb3da5a94 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.599203532 +0000 UTC m=+2.870882936 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls") pod "dns-operator-589895fbb7-wb6qq" (UID: "7d874a21-43aa-4d81-b904-853fb3da5a94") : secret "metrics-tls" not found Mar 13 01:12:22.599349 master-0 kubenswrapper[7599]: E0313 01:12:22.599246 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.599236463 +0000 UTC m=+2.870915867 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found Mar 13 01:12:22.599349 master-0 kubenswrapper[7599]: E0313 01:12:22.599266 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls podName:75a53c09-210a-4346-99b0-a632b9e0a3c9 nodeName:}" failed. 
No retries permitted until 2026-03-13 01:12:23.599257113 +0000 UTC m=+2.870936517 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls") pod "ingress-operator-677db989d6-p5c8r" (UID: "75a53c09-210a-4346-99b0-a632b9e0a3c9") : secret "metrics-tls" not found Mar 13 01:12:22.599349 master-0 kubenswrapper[7599]: I0313 01:12:22.599302 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:22.599696 master-0 kubenswrapper[7599]: I0313 01:12:22.599397 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:22.599696 master-0 kubenswrapper[7599]: I0313 01:12:22.599441 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:22.599696 master-0 kubenswrapper[7599]: I0313 01:12:22.599543 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: 
\"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:22.599696 master-0 kubenswrapper[7599]: I0313 01:12:22.599607 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:22.599992 master-0 kubenswrapper[7599]: E0313 01:12:22.599727 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 01:12:22.599992 master-0 kubenswrapper[7599]: E0313 01:12:22.599760 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert podName:53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.599751063 +0000 UTC m=+2.871430467 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-pj26h" (UID: "53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59") : secret "package-server-manager-serving-cert" not found Mar 13 01:12:22.600186 master-0 kubenswrapper[7599]: E0313 01:12:22.600151 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 01:12:22.600283 master-0 kubenswrapper[7599]: E0313 01:12:22.600200 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert podName:31f19d97-50f9-4486-a8f9-df61ef2b0528 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.600188313 +0000 UTC m=+2.871867717 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert") pod "olm-operator-d64cfc9db-r4gzg" (UID: "31f19d97-50f9-4486-a8f9-df61ef2b0528") : secret "olm-operator-serving-cert" not found Mar 13 01:12:22.600594 master-0 kubenswrapper[7599]: E0313 01:12:22.600561 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 01:12:22.600794 master-0 kubenswrapper[7599]: E0313 01:12:22.600770 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert podName:6ad2904e-ece9-4d72-8683-c3e691e07497 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.600748695 +0000 UTC m=+2.872428119 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert") pod "catalog-operator-7d9c49f57b-4jttq" (UID: "6ad2904e-ece9-4d72-8683-c3e691e07497") : secret "catalog-operator-serving-cert" not found Mar 13 01:12:22.600926 master-0 kubenswrapper[7599]: E0313 01:12:22.600673 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 01:12:22.601090 master-0 kubenswrapper[7599]: E0313 01:12:22.601068 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics podName:8ad2a6d5-6edf-4840-89f9-47847c8dac05 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.601048171 +0000 UTC m=+2.872727775 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-bx29h" (UID: "8ad2a6d5-6edf-4840-89f9-47847c8dac05") : secret "marketplace-operator-metrics" not found Mar 13 01:12:22.601210 master-0 kubenswrapper[7599]: E0313 01:12:22.600812 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:22.601381 master-0 kubenswrapper[7599]: E0313 01:12:22.601360 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.601343567 +0000 UTC m=+2.873023001 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:22.602232 master-0 kubenswrapper[7599]: I0313 01:12:22.602180 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8l9r\" (UniqueName: \"kubernetes.io/projected/6fd82994-f4d4-49e9-8742-07e206322e76-kube-api-access-k8l9r\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:22.611675 master-0 kubenswrapper[7599]: I0313 01:12:22.611618 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:12:22.646465 master-0 kubenswrapper[7599]: I0313 01:12:22.646367 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzxv5\" (UniqueName: \"kubernetes.io/projected/69da0e58-2ae6-4d4b-b125-77e93df3d660-kube-api-access-pzxv5\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:12:22.671062 master-0 kubenswrapper[7599]: I0313 01:12:22.671010 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj7cp\" (UniqueName: \"kubernetes.io/projected/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-kube-api-access-pj7cp\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:22.681491 master-0 kubenswrapper[7599]: I0313 01:12:22.681451 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6wzz\" (UniqueName: 
\"kubernetes.io/projected/8c377a67-e763-4925-afae-a7f8546a369b-kube-api-access-t6wzz\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:12:22.700640 master-0 kubenswrapper[7599]: I0313 01:12:22.700546 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:22.700789 master-0 kubenswrapper[7599]: I0313 01:12:22.700704 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:22.700848 master-0 kubenswrapper[7599]: I0313 01:12:22.700800 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5lg5\" (UniqueName: \"kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:22.700991 master-0 kubenswrapper[7599]: E0313 01:12:22.700961 7599 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 01:12:22.701063 master-0 kubenswrapper[7599]: E0313 01:12:22.701026 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 01:12:22.701144 master-0 
kubenswrapper[7599]: E0313 01:12:22.701049 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs podName:161d2fa6-a541-427a-a3e9-3297102a26f5 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.701027113 +0000 UTC m=+2.972706717 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs") pod "multus-admission-controller-8d675b596-ddtwn" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5") : secret "multus-admission-controller-secret" not found Mar 13 01:12:22.701201 master-0 kubenswrapper[7599]: E0313 01:12:22.701147 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:12:23.701120795 +0000 UTC m=+2.972800199 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : secret "metrics-daemon-secret" not found Mar 13 01:12:22.732431 master-0 kubenswrapper[7599]: I0313 01:12:22.731907 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txxbg\" (UniqueName: \"kubernetes.io/projected/c687237e-50e5-405d-8fef-0efbc3866630-kube-api-access-txxbg\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:12:22.742991 master-0 kubenswrapper[7599]: I0313 01:12:22.741972 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n58nf\" (UniqueName: \"kubernetes.io/projected/49a28ab7-1176-4213-b037-19fe18bbe57b-kube-api-access-n58nf\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:22.769316 master-0 kubenswrapper[7599]: I0313 01:12:22.767689 7599 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Mar 13 01:12:22.777590 master-0 kubenswrapper[7599]: I0313 01:12:22.777537 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:22.822766 master-0 kubenswrapper[7599]: I0313 01:12:22.822404 7599 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 01:12:22.849547 master-0 kubenswrapper[7599]: I0313 01:12:22.847651 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:23.220430 master-0 kubenswrapper[7599]: I0313 01:12:23.217389 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:12:23.230575 master-0 kubenswrapper[7599]: I0313 01:12:23.228384 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:12:23.241568 master-0 kubenswrapper[7599]: I0313 01:12:23.236083 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" event={"ID":"74efa52b-fd97-418a-9a44-914442633f74","Type":"ContainerStarted","Data":"e36d289d22f168d7dd54b3be83741c3fa40edda0e8989b419788c91296bea849"} Mar 13 01:12:23.241568 master-0 kubenswrapper[7599]: I0313 01:12:23.240323 7599 generic.go:334] "Generic (PLEG): container finished" podID="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" containerID="d71905c580f15e2bd3a3f12e29fbae0f3bf41f215518cae86b4ede0ed005dd7f" 
exitCode=0 Mar 13 01:12:23.241568 master-0 kubenswrapper[7599]: I0313 01:12:23.240366 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" event={"ID":"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a","Type":"ContainerDied","Data":"d71905c580f15e2bd3a3f12e29fbae0f3bf41f215518cae86b4ede0ed005dd7f"} Mar 13 01:12:23.250766 master-0 kubenswrapper[7599]: I0313 01:12:23.248150 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" event={"ID":"77e6cd9e-b6ef-491c-a5c3-60dab81fd752","Type":"ContainerStarted","Data":"f73c75626f2b8420b208819100f67cc78e1afc63da934e6341110ce6fd48cd90"} Mar 13 01:12:23.251822 master-0 kubenswrapper[7599]: I0313 01:12:23.251675 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" event={"ID":"b5757329-8692-4719-b3c7-b5df78110fcf","Type":"ContainerStarted","Data":"9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d"} Mar 13 01:12:23.261972 master-0 kubenswrapper[7599]: I0313 01:12:23.261872 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" event={"ID":"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b","Type":"ContainerStarted","Data":"b30ae4d37e850868384d04498318b52f585a63274ae43d082fa8cb4389cea8b3"} Mar 13 01:12:23.273706 master-0 kubenswrapper[7599]: I0313 01:12:23.273579 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" event={"ID":"96b67a99-eada-44d7-93eb-cc3ced777fc6","Type":"ContainerStarted","Data":"cc1038b189ab36843989b837c930bbf20934f08cf043e09fd788646b7d078f2a"} Mar 13 01:12:23.309293 master-0 kubenswrapper[7599]: I0313 01:12:23.309179 7599 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-service-ca/service-ca-84bfdbbb7f-n9vpf"] Mar 13 01:12:23.309446 master-0 kubenswrapper[7599]: E0313 01:12:23.309376 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19460daa-7d22-4d32-899c-274b86c56a13" containerName="assisted-installer-controller" Mar 13 01:12:23.309446 master-0 kubenswrapper[7599]: I0313 01:12:23.309389 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="19460daa-7d22-4d32-899c-274b86c56a13" containerName="assisted-installer-controller" Mar 13 01:12:23.309446 master-0 kubenswrapper[7599]: E0313 01:12:23.309403 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348e0611-5b3c-4238-a571-813fc16057df" containerName="prober" Mar 13 01:12:23.309446 master-0 kubenswrapper[7599]: I0313 01:12:23.309410 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="348e0611-5b3c-4238-a571-813fc16057df" containerName="prober" Mar 13 01:12:23.309614 master-0 kubenswrapper[7599]: I0313 01:12:23.309521 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="19460daa-7d22-4d32-899c-274b86c56a13" containerName="assisted-installer-controller" Mar 13 01:12:23.309614 master-0 kubenswrapper[7599]: I0313 01:12:23.309538 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="348e0611-5b3c-4238-a571-813fc16057df" containerName="prober" Mar 13 01:12:23.310589 master-0 kubenswrapper[7599]: I0313 01:12:23.309849 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.317779 master-0 kubenswrapper[7599]: I0313 01:12:23.317634 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Mar 13 01:12:23.317993 master-0 kubenswrapper[7599]: I0313 01:12:23.317956 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Mar 13 01:12:23.318144 master-0 kubenswrapper[7599]: I0313 01:12:23.318101 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 01:12:23.318144 master-0 kubenswrapper[7599]: I0313 01:12:23.318122 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 01:12:23.328828 master-0 kubenswrapper[7599]: I0313 01:12:23.328727 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-n9vpf"] Mar 13 01:12:23.384870 master-0 kubenswrapper[7599]: I0313 01:12:23.384081 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:23.404361 master-0 kubenswrapper[7599]: I0313 01:12:23.403956 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:12:23.418073 master-0 kubenswrapper[7599]: I0313 01:12:23.418031 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:23.419461 master-0 kubenswrapper[7599]: I0313 01:12:23.419419 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d89b5d71-5522-433e-a0bb-f2767332e744-signing-cabundle\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " 
pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.423978 master-0 kubenswrapper[7599]: I0313 01:12:23.423932 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmnh2\" (UniqueName: \"kubernetes.io/projected/d89b5d71-5522-433e-a0bb-f2767332e744-kube-api-access-lmnh2\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.424218 master-0 kubenswrapper[7599]: I0313 01:12:23.424189 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d89b5d71-5522-433e-a0bb-f2767332e744-signing-key\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.463541 master-0 kubenswrapper[7599]: I0313 01:12:23.456850 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:12:23.507049 master-0 kubenswrapper[7599]: I0313 01:12:23.506958 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-target-49pfj"] Mar 13 01:12:23.526110 master-0 kubenswrapper[7599]: I0313 01:12:23.525693 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmnh2\" (UniqueName: \"kubernetes.io/projected/d89b5d71-5522-433e-a0bb-f2767332e744-kube-api-access-lmnh2\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.526110 master-0 kubenswrapper[7599]: I0313 01:12:23.525820 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: 
\"kubernetes.io/secret/d89b5d71-5522-433e-a0bb-f2767332e744-signing-key\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.526110 master-0 kubenswrapper[7599]: I0313 01:12:23.526024 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d89b5d71-5522-433e-a0bb-f2767332e744-signing-cabundle\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.540617 master-0 kubenswrapper[7599]: I0313 01:12:23.540377 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d89b5d71-5522-433e-a0bb-f2767332e744-signing-cabundle\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.547678 master-0 kubenswrapper[7599]: I0313 01:12:23.547344 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d89b5d71-5522-433e-a0bb-f2767332e744-signing-key\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.565665 master-0 kubenswrapper[7599]: I0313 01:12:23.565374 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmnh2\" (UniqueName: \"kubernetes.io/projected/d89b5d71-5522-433e-a0bb-f2767332e744-kube-api-access-lmnh2\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.633070 master-0 kubenswrapper[7599]: I0313 01:12:23.632588 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:23.633228 master-0 kubenswrapper[7599]: I0313 01:12:23.633095 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:23.633228 master-0 kubenswrapper[7599]: I0313 01:12:23.633128 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:23.633228 master-0 kubenswrapper[7599]: I0313 01:12:23.633172 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:23.633228 master-0 kubenswrapper[7599]: E0313 01:12:23.633174 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 01:12:23.633228 master-0 kubenswrapper[7599]: I0313 01:12:23.633206 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:23.633377 master-0 kubenswrapper[7599]: I0313 01:12:23.633237 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:23.633377 master-0 kubenswrapper[7599]: E0313 01:12:23.633269 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert podName:31f19d97-50f9-4486-a8f9-df61ef2b0528 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.633242192 +0000 UTC m=+4.904921676 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert") pod "olm-operator-d64cfc9db-r4gzg" (UID: "31f19d97-50f9-4486-a8f9-df61ef2b0528") : secret "olm-operator-serving-cert" not found Mar 13 01:12:23.633377 master-0 kubenswrapper[7599]: I0313 01:12:23.633315 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:23.633377 master-0 kubenswrapper[7599]: I0313 01:12:23.633355 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:23.633489 master-0 kubenswrapper[7599]: E0313 01:12:23.633389 7599 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 01:12:23.633489 master-0 kubenswrapper[7599]: I0313 01:12:23.633415 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:23.633489 master-0 kubenswrapper[7599]: E0313 01:12:23.633469 7599 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls podName:91fc568a-61ad-400e-a54e-21d62e51bb17 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.633442796 +0000 UTC m=+4.905122270 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-6vvzl" (UID: "91fc568a-61ad-400e-a54e-21d62e51bb17") : secret "image-registry-operator-tls" not found Mar 13 01:12:23.633644 master-0 kubenswrapper[7599]: I0313 01:12:23.633498 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:23.633644 master-0 kubenswrapper[7599]: I0313 01:12:23.633566 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:23.633697 master-0 kubenswrapper[7599]: E0313 01:12:23.633646 7599 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:23.633697 master-0 kubenswrapper[7599]: E0313 01:12:23.633673 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls podName:7d874a21-43aa-4d81-b904-853fb3da5a94 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.633663402 +0000 UTC m=+4.905342786 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls") pod "dns-operator-589895fbb7-wb6qq" (UID: "7d874a21-43aa-4d81-b904-853fb3da5a94") : secret "metrics-tls" not found Mar 13 01:12:23.633755 master-0 kubenswrapper[7599]: E0313 01:12:23.633720 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 01:12:23.633755 master-0 kubenswrapper[7599]: E0313 01:12:23.633741 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics podName:8ad2a6d5-6edf-4840-89f9-47847c8dac05 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.633734873 +0000 UTC m=+4.905414267 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-bx29h" (UID: "8ad2a6d5-6edf-4840-89f9-47847c8dac05") : secret "marketplace-operator-metrics" not found Mar 13 01:12:23.633809 master-0 kubenswrapper[7599]: E0313 01:12:23.633778 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 01:12:23.633809 master-0 kubenswrapper[7599]: E0313 01:12:23.633799 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert podName:6ad2904e-ece9-4d72-8683-c3e691e07497 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.633794244 +0000 UTC m=+4.905473628 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert") pod "catalog-operator-7d9c49f57b-4jttq" (UID: "6ad2904e-ece9-4d72-8683-c3e691e07497") : secret "catalog-operator-serving-cert" not found Mar 13 01:12:23.633867 master-0 kubenswrapper[7599]: E0313 01:12:23.633832 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:23.633867 master-0 kubenswrapper[7599]: E0313 01:12:23.633850 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.633845545 +0000 UTC m=+4.905524939 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:23.633929 master-0 kubenswrapper[7599]: E0313 01:12:23.633888 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 01:12:23.633929 master-0 kubenswrapper[7599]: E0313 01:12:23.633908 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert podName:53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.633902116 +0000 UTC m=+4.905581510 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-pj26h" (UID: "53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59") : secret "package-server-manager-serving-cert" not found Mar 13 01:12:23.633983 master-0 kubenswrapper[7599]: E0313 01:12:23.633942 7599 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:23.633983 master-0 kubenswrapper[7599]: E0313 01:12:23.633961 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls podName:75a53c09-210a-4346-99b0-a632b9e0a3c9 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.633956298 +0000 UTC m=+4.905635692 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls") pod "ingress-operator-677db989d6-p5c8r" (UID: "75a53c09-210a-4346-99b0-a632b9e0a3c9") : secret "metrics-tls" not found Mar 13 01:12:23.636377 master-0 kubenswrapper[7599]: E0313 01:12:23.634144 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 01:12:23.636377 master-0 kubenswrapper[7599]: E0313 01:12:23.634295 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.634256784 +0000 UTC m=+4.905936178 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "node-tuning-operator-tls" not found Mar 13 01:12:23.636377 master-0 kubenswrapper[7599]: E0313 01:12:23.634361 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:23.636377 master-0 kubenswrapper[7599]: E0313 01:12:23.634403 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls podName:46015913-c499-49b1-a9f6-a61c6e96b13f nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.634379956 +0000 UTC m=+4.906059350 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-75jj7" (UID: "46015913-c499-49b1-a9f6-a61c6e96b13f") : secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:23.636377 master-0 kubenswrapper[7599]: E0313 01:12:23.636036 7599 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 01:12:23.636377 master-0 kubenswrapper[7599]: E0313 01:12:23.636089 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.636074122 +0000 UTC m=+4.907753586 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found Mar 13 01:12:23.689545 master-0 kubenswrapper[7599]: I0313 01:12:23.689476 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:12:23.735919 master-0 kubenswrapper[7599]: I0313 01:12:23.734917 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:23.735919 master-0 kubenswrapper[7599]: I0313 01:12:23.735042 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:23.735919 master-0 kubenswrapper[7599]: E0313 01:12:23.735175 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 01:12:23.735919 master-0 kubenswrapper[7599]: E0313 01:12:23.735246 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.735229196 +0000 UTC m=+5.006908590 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : secret "metrics-daemon-secret" not found Mar 13 01:12:23.735919 master-0 kubenswrapper[7599]: E0313 01:12:23.735293 7599 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 01:12:23.735919 master-0 kubenswrapper[7599]: E0313 01:12:23.735314 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs podName:161d2fa6-a541-427a-a3e9-3297102a26f5 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:25.735306498 +0000 UTC m=+5.006985892 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs") pod "multus-admission-controller-8d675b596-ddtwn" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5") : secret "multus-admission-controller-secret" not found Mar 13 01:12:23.919206 master-0 kubenswrapper[7599]: I0313 01:12:23.919123 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-84bfdbbb7f-n9vpf"] Mar 13 01:12:24.223265 master-0 kubenswrapper[7599]: I0313 01:12:24.222827 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:12:24.258567 master-0 kubenswrapper[7599]: I0313 01:12:24.254658 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:12:24.297825 master-0 kubenswrapper[7599]: I0313 01:12:24.297764 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-49pfj" 
event={"ID":"34889110-f282-4c2c-a2b0-620033559e1b","Type":"ContainerStarted","Data":"d52a6a22ec0123e055651b76baefe3823fc66bf55b6fa9bd2da384480e4ca0d4"} Mar 13 01:12:24.298115 master-0 kubenswrapper[7599]: I0313 01:12:24.298101 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-49pfj" event={"ID":"34889110-f282-4c2c-a2b0-620033559e1b","Type":"ContainerStarted","Data":"72c7baf13da514fc8287177e18c17708037dccda828bfe98993c839421246be0"} Mar 13 01:12:24.310895 master-0 kubenswrapper[7599]: I0313 01:12:24.302891 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" event={"ID":"c6db75e5-efd1-4bfa-9941-0934d7621ba2","Type":"ContainerStarted","Data":"c248d157af93f66dc74e732d276f334cdb9f66f93ff85dda8f8ef75466a1cda2"} Mar 13 01:12:24.310895 master-0 kubenswrapper[7599]: I0313 01:12:24.306649 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" event={"ID":"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea","Type":"ContainerStarted","Data":"db75a500d25df1d35034bc9e7d835e3af06e992e3af2605476ce0e45095ba6b9"} Mar 13 01:12:24.310895 master-0 kubenswrapper[7599]: I0313 01:12:24.309120 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8" event={"ID":"d163333f-fda5-4067-ad7c-6f646ae411c8","Type":"ContainerStarted","Data":"dcc703093990b8e00276e73b190b8ad660be51c65de2d9d1fcf3dcb04c926632"} Mar 13 01:12:24.314834 master-0 kubenswrapper[7599]: I0313 01:12:24.311794 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" event={"ID":"d89b5d71-5522-433e-a0bb-f2767332e744","Type":"ContainerStarted","Data":"912e2ac272264595803facca85d0a19fa4209461cc659cb846081d6f6238b07e"} Mar 13 01:12:24.314834 master-0 
kubenswrapper[7599]: I0313 01:12:24.311821 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" event={"ID":"d89b5d71-5522-433e-a0bb-f2767332e744","Type":"ContainerStarted","Data":"a1c5dbaa4dceb86f442ef113d610b47a414073825f45b1abbdb54ba9c2a0c83a"} Mar 13 01:12:24.314834 master-0 kubenswrapper[7599]: I0313 01:12:24.312366 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:12:24.314834 master-0 kubenswrapper[7599]: I0313 01:12:24.312381 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:12:24.342925 master-0 kubenswrapper[7599]: I0313 01:12:24.341944 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" podStartSLOduration=1.341922042 podStartE2EDuration="1.341922042s" podCreationTimestamp="2026-03-13 01:12:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:24.339934711 +0000 UTC m=+3.611614105" watchObservedRunningTime="2026-03-13 01:12:24.341922042 +0000 UTC m=+3.613601436" Mar 13 01:12:24.706180 master-0 kubenswrapper[7599]: I0313 01:12:24.705978 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" Mar 13 01:12:25.318541 master-0 kubenswrapper[7599]: I0313 01:12:25.315533 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:12:25.605260 master-0 kubenswrapper[7599]: I0313 01:12:25.605016 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9"] Mar 13 01:12:25.605697 master-0 kubenswrapper[7599]: I0313 01:12:25.605657 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" Mar 13 01:12:25.609179 master-0 kubenswrapper[7599]: I0313 01:12:25.609118 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 13 01:12:25.609983 master-0 kubenswrapper[7599]: I0313 01:12:25.609956 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Mar 13 01:12:25.618572 master-0 kubenswrapper[7599]: I0313 01:12:25.618484 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9"] Mar 13 01:12:25.673574 master-0 kubenswrapper[7599]: I0313 01:12:25.673495 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:25.673853 master-0 kubenswrapper[7599]: E0313 01:12:25.673735 7599 secret.go:189] Couldn't get secret openshift-dns-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:25.673853 master-0 kubenswrapper[7599]: I0313 01:12:25.673779 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:25.673853 master-0 kubenswrapper[7599]: E0313 01:12:25.673852 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls podName:7d874a21-43aa-4d81-b904-853fb3da5a94 nodeName:}" failed. 
No retries permitted until 2026-03-13 01:12:29.673823536 +0000 UTC m=+8.945502930 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls") pod "dns-operator-589895fbb7-wb6qq" (UID: "7d874a21-43aa-4d81-b904-853fb3da5a94") : secret "metrics-tls" not found Mar 13 01:12:25.673970 master-0 kubenswrapper[7599]: I0313 01:12:25.673877 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:25.673970 master-0 kubenswrapper[7599]: E0313 01:12:25.673894 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 01:12:25.673970 master-0 kubenswrapper[7599]: I0313 01:12:25.673924 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:25.673970 master-0 kubenswrapper[7599]: E0313 01:12:25.673969 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert podName:31f19d97-50f9-4486-a8f9-df61ef2b0528 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.673946628 +0000 UTC m=+8.945626022 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert") pod "olm-operator-d64cfc9db-r4gzg" (UID: "31f19d97-50f9-4486-a8f9-df61ef2b0528") : secret "olm-operator-serving-cert" not found Mar 13 01:12:25.674080 master-0 kubenswrapper[7599]: I0313 01:12:25.674005 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:25.674080 master-0 kubenswrapper[7599]: I0313 01:12:25.674039 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:25.674080 master-0 kubenswrapper[7599]: I0313 01:12:25.674070 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:25.674158 master-0 kubenswrapper[7599]: I0313 01:12:25.674090 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod 
\"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:25.674158 master-0 kubenswrapper[7599]: E0313 01:12:25.674093 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/performance-addon-operator-webhook-cert: secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:25.674158 master-0 kubenswrapper[7599]: E0313 01:12:25.674102 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 01:12:25.674158 master-0 kubenswrapper[7599]: E0313 01:12:25.674147 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 01:12:25.674267 master-0 kubenswrapper[7599]: E0313 01:12:25.674180 7599 secret.go:189] Couldn't get secret openshift-cluster-node-tuning-operator/node-tuning-operator-tls: secret "node-tuning-operator-tls" not found Mar 13 01:12:25.674267 master-0 kubenswrapper[7599]: E0313 01:12:25.674128 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.674122382 +0000 UTC m=+8.945801776 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "performance-addon-operator-webhook-cert" not found Mar 13 01:12:25.674267 master-0 kubenswrapper[7599]: E0313 01:12:25.674229 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert podName:6ad2904e-ece9-4d72-8683-c3e691e07497 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.674213064 +0000 UTC m=+8.945892638 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert") pod "catalog-operator-7d9c49f57b-4jttq" (UID: "6ad2904e-ece9-4d72-8683-c3e691e07497") : secret "catalog-operator-serving-cert" not found Mar 13 01:12:25.674267 master-0 kubenswrapper[7599]: E0313 01:12:25.674235 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 01:12:25.674267 master-0 kubenswrapper[7599]: E0313 01:12:25.674246 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert podName:53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.674237454 +0000 UTC m=+8.945917058 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-pj26h" (UID: "53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59") : secret "package-server-manager-serving-cert" not found Mar 13 01:12:25.674267 master-0 kubenswrapper[7599]: E0313 01:12:25.674253 7599 secret.go:189] Couldn't get secret openshift-image-registry/image-registry-operator-tls: secret "image-registry-operator-tls" not found Mar 13 01:12:25.674438 master-0 kubenswrapper[7599]: E0313 01:12:25.674283 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics podName:8ad2a6d5-6edf-4840-89f9-47847c8dac05 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.674260375 +0000 UTC m=+8.945939769 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-bx29h" (UID: "8ad2a6d5-6edf-4840-89f9-47847c8dac05") : secret "marketplace-operator-metrics" not found Mar 13 01:12:25.674438 master-0 kubenswrapper[7599]: E0313 01:12:25.674304 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls podName:8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.674296306 +0000 UTC m=+8.945975700 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "node-tuning-operator-tls" (UniqueName: "kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls") pod "cluster-node-tuning-operator-66c7586884-wk89g" (UID: "8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7") : secret "node-tuning-operator-tls" not found Mar 13 01:12:25.674438 master-0 kubenswrapper[7599]: I0313 01:12:25.674342 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:25.674438 master-0 kubenswrapper[7599]: E0313 01:12:25.674396 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:25.674438 master-0 kubenswrapper[7599]: I0313 01:12:25.674407 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:25.674438 master-0 kubenswrapper[7599]: E0313 01:12:25.674429 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls podName:46015913-c499-49b1-a9f6-a61c6e96b13f nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.674420688 +0000 UTC m=+8.946100082 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-75jj7" (UID: "46015913-c499-49b1-a9f6-a61c6e96b13f") : secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:25.674629 master-0 kubenswrapper[7599]: I0313 01:12:25.674443 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:25.674629 master-0 kubenswrapper[7599]: E0313 01:12:25.674447 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls podName:91fc568a-61ad-400e-a54e-21d62e51bb17 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.674439668 +0000 UTC m=+8.946119062 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "image-registry-operator-tls" (UniqueName: "kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls") pod "cluster-image-registry-operator-86d6d77c7c-6vvzl" (UID: "91fc568a-61ad-400e-a54e-21d62e51bb17") : secret "image-registry-operator-tls" not found Mar 13 01:12:25.674629 master-0 kubenswrapper[7599]: E0313 01:12:25.674482 7599 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: secret "cluster-version-operator-serving-cert" not found Mar 13 01:12:25.674629 master-0 kubenswrapper[7599]: E0313 01:12:25.674521 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert podName:2d368174-c659-444e-ba28-8fa267c0eda6 nodeName:}" failed. 
No retries permitted until 2026-03-13 01:12:29.674501 +0000 UTC m=+8.946180384 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert") pod "cluster-version-operator-745944c6b7-dqdgs" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6") : secret "cluster-version-operator-serving-cert" not found Mar 13 01:12:25.674629 master-0 kubenswrapper[7599]: E0313 01:12:25.674572 7599 secret.go:189] Couldn't get secret openshift-ingress-operator/metrics-tls: secret "metrics-tls" not found Mar 13 01:12:25.674629 master-0 kubenswrapper[7599]: E0313 01:12:25.674591 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls podName:75a53c09-210a-4346-99b0-a632b9e0a3c9 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.674585461 +0000 UTC m=+8.946264855 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls") pod "ingress-operator-677db989d6-p5c8r" (UID: "75a53c09-210a-4346-99b0-a632b9e0a3c9") : secret "metrics-tls" not found Mar 13 01:12:25.775698 master-0 kubenswrapper[7599]: I0313 01:12:25.775623 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:25.776023 master-0 kubenswrapper[7599]: I0313 01:12:25.775720 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qmr7z\" (UniqueName: \"kubernetes.io/projected/f771149b-9d62-408e-be6f-72f575b1ec42-kube-api-access-qmr7z\") pod \"migrator-57ccdf9b5-5zsh9\" (UID: 
\"f771149b-9d62-408e-be6f-72f575b1ec42\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" Mar 13 01:12:25.776023 master-0 kubenswrapper[7599]: I0313 01:12:25.775891 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:25.776023 master-0 kubenswrapper[7599]: E0313 01:12:25.775899 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 01:12:25.776103 master-0 kubenswrapper[7599]: E0313 01:12:25.776034 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.776000254 +0000 UTC m=+9.047679708 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : secret "metrics-daemon-secret" not found Mar 13 01:12:25.776180 master-0 kubenswrapper[7599]: E0313 01:12:25.776131 7599 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 01:12:25.776274 master-0 kubenswrapper[7599]: E0313 01:12:25.776253 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs podName:161d2fa6-a541-427a-a3e9-3297102a26f5 nodeName:}" failed. 
No retries permitted until 2026-03-13 01:12:29.776222569 +0000 UTC m=+9.047901963 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs") pod "multus-admission-controller-8d675b596-ddtwn" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5") : secret "multus-admission-controller-secret" not found Mar 13 01:12:25.877667 master-0 kubenswrapper[7599]: I0313 01:12:25.877539 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmr7z\" (UniqueName: \"kubernetes.io/projected/f771149b-9d62-408e-be6f-72f575b1ec42-kube-api-access-qmr7z\") pod \"migrator-57ccdf9b5-5zsh9\" (UID: \"f771149b-9d62-408e-be6f-72f575b1ec42\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" Mar 13 01:12:25.928626 master-0 kubenswrapper[7599]: I0313 01:12:25.928553 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmr7z\" (UniqueName: \"kubernetes.io/projected/f771149b-9d62-408e-be6f-72f575b1ec42-kube-api-access-qmr7z\") pod \"migrator-57ccdf9b5-5zsh9\" (UID: \"f771149b-9d62-408e-be6f-72f575b1ec42\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" Mar 13 01:12:25.956161 master-0 kubenswrapper[7599]: I0313 01:12:25.956080 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" Mar 13 01:12:25.975129 master-0 kubenswrapper[7599]: I0313 01:12:25.975082 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld"] Mar 13 01:12:25.975974 master-0 kubenswrapper[7599]: I0313 01:12:25.975712 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" Mar 13 01:12:25.989267 master-0 kubenswrapper[7599]: I0313 01:12:25.989186 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld"] Mar 13 01:12:26.089120 master-0 kubenswrapper[7599]: I0313 01:12:26.089059 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zzqj\" (UniqueName: \"kubernetes.io/projected/0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a-kube-api-access-5zzqj\") pod \"csi-snapshot-controller-7577d6f48-bj5ld\" (UID: \"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" Mar 13 01:12:26.194222 master-0 kubenswrapper[7599]: I0313 01:12:26.191047 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zzqj\" (UniqueName: \"kubernetes.io/projected/0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a-kube-api-access-5zzqj\") pod \"csi-snapshot-controller-7577d6f48-bj5ld\" (UID: \"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" Mar 13 01:12:26.221437 master-0 kubenswrapper[7599]: I0313 01:12:26.215300 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zzqj\" (UniqueName: \"kubernetes.io/projected/0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a-kube-api-access-5zzqj\") pod \"csi-snapshot-controller-7577d6f48-bj5ld\" (UID: \"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" Mar 13 01:12:26.249547 master-0 kubenswrapper[7599]: I0313 01:12:26.249478 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9"] Mar 13 01:12:26.329304 master-0 kubenswrapper[7599]: I0313 01:12:26.329240 7599 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" Mar 13 01:12:26.332862 master-0 kubenswrapper[7599]: I0313 01:12:26.332757 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" event={"ID":"f771149b-9d62-408e-be6f-72f575b1ec42","Type":"ContainerStarted","Data":"cdd0c71504e94f6dcb39dab229fb181eeb5ab28f2092fb5e419d885709d3d1ae"} Mar 13 01:12:26.424288 master-0 kubenswrapper[7599]: I0313 01:12:26.424152 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2"] Mar 13 01:12:26.426202 master-0 kubenswrapper[7599]: I0313 01:12:26.424830 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.432032 master-0 kubenswrapper[7599]: I0313 01:12:26.431424 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 01:12:26.432032 master-0 kubenswrapper[7599]: I0313 01:12:26.431619 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 01:12:26.432032 master-0 kubenswrapper[7599]: I0313 01:12:26.431757 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 01:12:26.432032 master-0 kubenswrapper[7599]: I0313 01:12:26.431767 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 01:12:26.439545 master-0 kubenswrapper[7599]: I0313 01:12:26.439145 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 01:12:26.439545 master-0 kubenswrapper[7599]: I0313 01:12:26.439300 7599 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 01:12:26.449564 master-0 kubenswrapper[7599]: I0313 01:12:26.449470 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:26.449742 master-0 kubenswrapper[7599]: I0313 01:12:26.449653 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:12:26.449742 master-0 kubenswrapper[7599]: I0313 01:12:26.449665 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:12:26.455684 master-0 kubenswrapper[7599]: I0313 01:12:26.451695 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2"] Mar 13 01:12:26.512636 master-0 kubenswrapper[7599]: I0313 01:12:26.505593 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vzbk\" (UniqueName: \"kubernetes.io/projected/c9bbf75c-46ac-4556-b7ec-811807475615-kube-api-access-8vzbk\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.512636 master-0 kubenswrapper[7599]: I0313 01:12:26.505669 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-config\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.512636 master-0 kubenswrapper[7599]: I0313 01:12:26.505748 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca\") pod 
\"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.512636 master-0 kubenswrapper[7599]: I0313 01:12:26.505809 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.512636 master-0 kubenswrapper[7599]: I0313 01:12:26.505873 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.530950 master-0 kubenswrapper[7599]: I0313 01:12:26.529769 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:26.607555 master-0 kubenswrapper[7599]: I0313 01:12:26.607363 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.607829 master-0 kubenswrapper[7599]: I0313 01:12:26.607503 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8vzbk\" (UniqueName: \"kubernetes.io/projected/c9bbf75c-46ac-4556-b7ec-811807475615-kube-api-access-8vzbk\") pod 
\"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.607829 master-0 kubenswrapper[7599]: I0313 01:12:26.607729 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-config\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.607829 master-0 kubenswrapper[7599]: I0313 01:12:26.607803 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.607938 master-0 kubenswrapper[7599]: I0313 01:12:26.607888 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.612529 master-0 kubenswrapper[7599]: E0313 01:12:26.608137 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:26.612529 master-0 kubenswrapper[7599]: E0313 01:12:26.608265 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert podName:c9bbf75c-46ac-4556-b7ec-811807475615 nodeName:}" failed. 
No retries permitted until 2026-03-13 01:12:27.108215491 +0000 UTC m=+6.379894885 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert") pod "controller-manager-6f7fd6c796-xzbb2" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615") : secret "serving-cert" not found Mar 13 01:12:26.612529 master-0 kubenswrapper[7599]: E0313 01:12:26.610122 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: configmap "openshift-global-ca" not found Mar 13 01:12:26.612529 master-0 kubenswrapper[7599]: E0313 01:12:26.610211 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-proxy-ca-bundles podName:c9bbf75c-46ac-4556-b7ec-811807475615 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:27.110189653 +0000 UTC m=+6.381869047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-proxy-ca-bundles") pod "controller-manager-6f7fd6c796-xzbb2" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615") : configmap "openshift-global-ca" not found Mar 13 01:12:26.612529 master-0 kubenswrapper[7599]: E0313 01:12:26.610343 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:26.612529 master-0 kubenswrapper[7599]: E0313 01:12:26.610365 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca podName:c9bbf75c-46ac-4556-b7ec-811807475615 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:27.110358997 +0000 UTC m=+6.382038391 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca") pod "controller-manager-6f7fd6c796-xzbb2" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615") : configmap "client-ca" not found Mar 13 01:12:26.612529 master-0 kubenswrapper[7599]: E0313 01:12:26.610434 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: configmap "config" not found Mar 13 01:12:26.612529 master-0 kubenswrapper[7599]: E0313 01:12:26.610477 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-config podName:c9bbf75c-46ac-4556-b7ec-811807475615 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:27.110464989 +0000 UTC m=+6.382144383 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-config") pod "controller-manager-6f7fd6c796-xzbb2" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615") : configmap "config" not found Mar 13 01:12:26.628625 master-0 kubenswrapper[7599]: I0313 01:12:26.628567 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld"] Mar 13 01:12:26.640187 master-0 kubenswrapper[7599]: I0313 01:12:26.640023 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8vzbk\" (UniqueName: \"kubernetes.io/projected/c9bbf75c-46ac-4556-b7ec-811807475615-kube-api-access-8vzbk\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:26.652053 master-0 kubenswrapper[7599]: W0313 01:12:26.652018 7599 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cc21ef9_a7c9_4154_811d_3cfff8ff3e1a.slice/crio-60de85ba97afadaf001b2cf07b2675a887f7f03299ff0b0c7cf2b1b3a76b1ac0 WatchSource:0}: Error finding container 60de85ba97afadaf001b2cf07b2675a887f7f03299ff0b0c7cf2b1b3a76b1ac0: Status 404 returned error can't find the container with id 60de85ba97afadaf001b2cf07b2675a887f7f03299ff0b0c7cf2b1b3a76b1ac0 Mar 13 01:12:26.819695 master-0 kubenswrapper[7599]: I0313 01:12:26.818338 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-49pfj" Mar 13 01:12:26.900962 master-0 kubenswrapper[7599]: I0313 01:12:26.900875 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2"] Mar 13 01:12:26.901263 master-0 kubenswrapper[7599]: E0313 01:12:26.901216 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca config proxy-ca-bundles serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" podUID="c9bbf75c-46ac-4556-b7ec-811807475615" Mar 13 01:12:26.909478 master-0 kubenswrapper[7599]: I0313 01:12:26.909423 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9"] Mar 13 01:12:26.910119 master-0 kubenswrapper[7599]: I0313 01:12:26.910098 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:26.918193 master-0 kubenswrapper[7599]: I0313 01:12:26.915150 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 01:12:26.918193 master-0 kubenswrapper[7599]: I0313 01:12:26.915284 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 01:12:26.918193 master-0 kubenswrapper[7599]: I0313 01:12:26.915523 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 01:12:26.918193 master-0 kubenswrapper[7599]: I0313 01:12:26.915543 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 01:12:26.918193 master-0 kubenswrapper[7599]: I0313 01:12:26.915152 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 01:12:26.930136 master-0 kubenswrapper[7599]: I0313 01:12:26.930065 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9"] Mar 13 01:12:27.012117 master-0 kubenswrapper[7599]: I0313 01:12:27.012033 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.012117 master-0 kubenswrapper[7599]: I0313 01:12:27.012110 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64q2s\" (UniqueName: 
\"kubernetes.io/projected/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-kube-api-access-64q2s\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.012433 master-0 kubenswrapper[7599]: I0313 01:12:27.012177 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-config\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.012433 master-0 kubenswrapper[7599]: I0313 01:12:27.012212 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.114596 master-0 kubenswrapper[7599]: I0313 01:12:27.113903 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-config\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:27.114596 master-0 kubenswrapper[7599]: I0313 01:12:27.113972 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " 
pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:27.114596 master-0 kubenswrapper[7599]: E0313 01:12:27.114361 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:27.114596 master-0 kubenswrapper[7599]: I0313 01:12:27.114362 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:27.114596 master-0 kubenswrapper[7599]: E0313 01:12:27.114421 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca podName:c9bbf75c-46ac-4556-b7ec-811807475615 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:28.114402914 +0000 UTC m=+7.386082308 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca") pod "controller-manager-6f7fd6c796-xzbb2" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615") : configmap "client-ca" not found Mar 13 01:12:27.114596 master-0 kubenswrapper[7599]: I0313 01:12:27.114440 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.114596 master-0 kubenswrapper[7599]: I0313 01:12:27.114494 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64q2s\" (UniqueName: \"kubernetes.io/projected/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-kube-api-access-64q2s\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.114596 master-0 kubenswrapper[7599]: I0313 01:12:27.114585 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:27.115004 master-0 kubenswrapper[7599]: I0313 01:12:27.114631 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-config\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " 
pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.115004 master-0 kubenswrapper[7599]: I0313 01:12:27.114702 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.115991 master-0 kubenswrapper[7599]: I0313 01:12:27.115915 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-config\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:27.118070 master-0 kubenswrapper[7599]: E0313 01:12:27.118034 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:27.118371 master-0 kubenswrapper[7599]: E0313 01:12:27.118305 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:27.118457 master-0 kubenswrapper[7599]: E0313 01:12:27.118434 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:27.118457 master-0 kubenswrapper[7599]: E0313 01:12:27.118349 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. No retries permitted until 2026-03-13 01:12:27.618305535 +0000 UTC m=+6.889984929 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : configmap "client-ca" not found Mar 13 01:12:27.118553 master-0 kubenswrapper[7599]: E0313 01:12:27.118488 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert podName:c9bbf75c-46ac-4556-b7ec-811807475615 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:28.11846333 +0000 UTC m=+7.390142714 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert") pod "controller-manager-6f7fd6c796-xzbb2" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615") : secret "serving-cert" not found Mar 13 01:12:27.118589 master-0 kubenswrapper[7599]: E0313 01:12:27.118574 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. No retries permitted until 2026-03-13 01:12:27.61849807 +0000 UTC m=+6.890177464 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : secret "serving-cert" not found Mar 13 01:12:27.119707 master-0 kubenswrapper[7599]: I0313 01:12:27.119662 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-config\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.124186 master-0 kubenswrapper[7599]: I0313 01:12:27.123929 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-proxy-ca-bundles\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:27.136944 master-0 kubenswrapper[7599]: I0313 01:12:27.136898 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64q2s\" (UniqueName: \"kubernetes.io/projected/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-kube-api-access-64q2s\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.368536 master-0 kubenswrapper[7599]: I0313 01:12:27.368470 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:27.369216 master-0 kubenswrapper[7599]: I0313 01:12:27.368504 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerStarted","Data":"60de85ba97afadaf001b2cf07b2675a887f7f03299ff0b0c7cf2b1b3a76b1ac0"} Mar 13 01:12:27.369216 master-0 kubenswrapper[7599]: I0313 01:12:27.368701 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:12:27.376139 master-0 kubenswrapper[7599]: I0313 01:12:27.376106 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:27.395615 master-0 kubenswrapper[7599]: I0313 01:12:27.395524 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:12:27.401525 master-0 kubenswrapper[7599]: I0313 01:12:27.401466 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:12:27.519607 master-0 kubenswrapper[7599]: I0313 01:12:27.519537 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-config\") pod \"c9bbf75c-46ac-4556-b7ec-811807475615\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " Mar 13 01:12:27.519805 master-0 kubenswrapper[7599]: I0313 01:12:27.519622 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8vzbk\" (UniqueName: \"kubernetes.io/projected/c9bbf75c-46ac-4556-b7ec-811807475615-kube-api-access-8vzbk\") pod \"c9bbf75c-46ac-4556-b7ec-811807475615\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " Mar 13 
01:12:27.519805 master-0 kubenswrapper[7599]: I0313 01:12:27.519692 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-proxy-ca-bundles\") pod \"c9bbf75c-46ac-4556-b7ec-811807475615\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " Mar 13 01:12:27.520284 master-0 kubenswrapper[7599]: I0313 01:12:27.520202 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-config" (OuterVolumeSpecName: "config") pod "c9bbf75c-46ac-4556-b7ec-811807475615" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:27.520913 master-0 kubenswrapper[7599]: I0313 01:12:27.520734 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c9bbf75c-46ac-4556-b7ec-811807475615" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:27.520967 master-0 kubenswrapper[7599]: I0313 01:12:27.520943 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:27.520967 master-0 kubenswrapper[7599]: I0313 01:12:27.520963 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:27.528180 master-0 kubenswrapper[7599]: I0313 01:12:27.528128 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c9bbf75c-46ac-4556-b7ec-811807475615-kube-api-access-8vzbk" (OuterVolumeSpecName: "kube-api-access-8vzbk") pod "c9bbf75c-46ac-4556-b7ec-811807475615" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615"). InnerVolumeSpecName "kube-api-access-8vzbk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:12:27.622702 master-0 kubenswrapper[7599]: I0313 01:12:27.622437 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.623341 master-0 kubenswrapper[7599]: I0313 01:12:27.623308 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:27.623940 master-0 kubenswrapper[7599]: E0313 01:12:27.623429 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:27.623991 master-0 kubenswrapper[7599]: E0313 01:12:27.623958 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:27.624061 master-0 kubenswrapper[7599]: I0313 01:12:27.624008 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8vzbk\" (UniqueName: \"kubernetes.io/projected/c9bbf75c-46ac-4556-b7ec-811807475615-kube-api-access-8vzbk\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:27.624154 master-0 kubenswrapper[7599]: E0313 01:12:27.624125 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. 
No retries permitted until 2026-03-13 01:12:28.62409196 +0000 UTC m=+7.895771534 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : secret "serving-cert" not found Mar 13 01:12:27.624202 master-0 kubenswrapper[7599]: E0313 01:12:27.624164 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. No retries permitted until 2026-03-13 01:12:28.624155522 +0000 UTC m=+7.895835146 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : configmap "client-ca" not found Mar 13 01:12:27.756604 master-0 kubenswrapper[7599]: I0313 01:12:27.756454 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:12:27.762191 master-0 kubenswrapper[7599]: I0313 01:12:27.762132 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:12:28.131249 master-0 kubenswrapper[7599]: I0313 01:12:28.131184 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:28.131453 master-0 kubenswrapper[7599]: I0313 01:12:28.131294 7599 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert\") pod \"controller-manager-6f7fd6c796-xzbb2\" (UID: \"c9bbf75c-46ac-4556-b7ec-811807475615\") " pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:28.131574 master-0 kubenswrapper[7599]: E0313 01:12:28.131524 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:28.131675 master-0 kubenswrapper[7599]: E0313 01:12:28.131611 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert podName:c9bbf75c-46ac-4556-b7ec-811807475615 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:30.13158482 +0000 UTC m=+9.403264214 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert") pod "controller-manager-6f7fd6c796-xzbb2" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615") : secret "serving-cert" not found Mar 13 01:12:28.132124 master-0 kubenswrapper[7599]: E0313 01:12:28.132101 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:28.132180 master-0 kubenswrapper[7599]: E0313 01:12:28.132143 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca podName:c9bbf75c-46ac-4556-b7ec-811807475615 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:30.132132752 +0000 UTC m=+9.403812156 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca") pod "controller-manager-6f7fd6c796-xzbb2" (UID: "c9bbf75c-46ac-4556-b7ec-811807475615") : configmap "client-ca" not found Mar 13 01:12:28.372459 master-0 kubenswrapper[7599]: I0313 01:12:28.372383 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2" Mar 13 01:12:28.420447 master-0 kubenswrapper[7599]: I0313 01:12:28.420383 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7cc8885c57-sznqx"] Mar 13 01:12:28.421290 master-0 kubenswrapper[7599]: I0313 01:12:28.421165 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.424866 master-0 kubenswrapper[7599]: I0313 01:12:28.424811 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 01:12:28.425197 master-0 kubenswrapper[7599]: I0313 01:12:28.425024 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 01:12:28.425197 master-0 kubenswrapper[7599]: I0313 01:12:28.425186 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 01:12:28.429529 master-0 kubenswrapper[7599]: I0313 01:12:28.426173 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 01:12:28.429529 master-0 kubenswrapper[7599]: I0313 01:12:28.426621 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 01:12:28.438544 master-0 kubenswrapper[7599]: I0313 01:12:28.438479 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:28.438982 master-0 kubenswrapper[7599]: I0313 01:12:28.438964 7599 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:12:28.439907 master-0 kubenswrapper[7599]: I0313 01:12:28.439827 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2"] Mar 13 01:12:28.449841 master-0 kubenswrapper[7599]: I0313 01:12:28.441816 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6f7fd6c796-xzbb2"] Mar 13 01:12:28.449841 master-0 kubenswrapper[7599]: I0313 01:12:28.442344 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7cc8885c57-sznqx"] Mar 13 01:12:28.449841 master-0 kubenswrapper[7599]: I0313 01:12:28.449622 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 01:12:28.540002 master-0 kubenswrapper[7599]: I0313 01:12:28.539945 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.540602 master-0 kubenswrapper[7599]: I0313 01:12:28.540544 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mxp6\" (UniqueName: \"kubernetes.io/projected/5a124fde-6ed7-4846-8be7-9665ce7229d8-kube-api-access-8mxp6\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.540690 master-0 kubenswrapper[7599]: I0313 
01:12:28.540662 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-proxy-ca-bundles\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.540734 master-0 kubenswrapper[7599]: I0313 01:12:28.540702 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.540829 master-0 kubenswrapper[7599]: I0313 01:12:28.540804 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-config\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.541026 master-0 kubenswrapper[7599]: I0313 01:12:28.540959 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c9bbf75c-46ac-4556-b7ec-811807475615-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:28.541026 master-0 kubenswrapper[7599]: I0313 01:12:28.540986 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c9bbf75c-46ac-4556-b7ec-811807475615-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:28.644970 master-0 kubenswrapper[7599]: I0313 01:12:28.642378 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.644970 master-0 kubenswrapper[7599]: I0313 01:12:28.642465 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:28.644970 master-0 kubenswrapper[7599]: I0313 01:12:28.642499 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mxp6\" (UniqueName: \"kubernetes.io/projected/5a124fde-6ed7-4846-8be7-9665ce7229d8-kube-api-access-8mxp6\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.644970 master-0 kubenswrapper[7599]: I0313 01:12:28.642558 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-proxy-ca-bundles\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.644970 master-0 kubenswrapper[7599]: I0313 01:12:28.642587 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " 
pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.644970 master-0 kubenswrapper[7599]: I0313 01:12:28.642646 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:28.644970 master-0 kubenswrapper[7599]: I0313 01:12:28.642671 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-config\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.644970 master-0 kubenswrapper[7599]: I0313 01:12:28.644248 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-config\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.644970 master-0 kubenswrapper[7599]: E0313 01:12:28.644690 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:28.644970 master-0 kubenswrapper[7599]: E0313 01:12:28.644848 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:28.647304 master-0 kubenswrapper[7599]: E0313 01:12:28.645007 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:28.647304 master-0 kubenswrapper[7599]: E0313 
01:12:28.645192 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:28.647304 master-0 kubenswrapper[7599]: I0313 01:12:28.645854 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-proxy-ca-bundles\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:28.647502 master-0 kubenswrapper[7599]: E0313 01:12:28.644736 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca podName:5a124fde-6ed7-4846-8be7-9665ce7229d8 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.144721329 +0000 UTC m=+8.416400723 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca") pod "controller-manager-7cc8885c57-sznqx" (UID: "5a124fde-6ed7-4846-8be7-9665ce7229d8") : configmap "client-ca" not found Mar 13 01:12:28.647502 master-0 kubenswrapper[7599]: E0313 01:12:28.647451 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert podName:5a124fde-6ed7-4846-8be7-9665ce7229d8 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:29.147437176 +0000 UTC m=+8.419116570 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert") pod "controller-manager-7cc8885c57-sznqx" (UID: "5a124fde-6ed7-4846-8be7-9665ce7229d8") : secret "serving-cert" not found Mar 13 01:12:28.647502 master-0 kubenswrapper[7599]: E0313 01:12:28.647470 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. No retries permitted until 2026-03-13 01:12:30.647460257 +0000 UTC m=+9.919139891 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : configmap "client-ca" not found Mar 13 01:12:28.647502 master-0 kubenswrapper[7599]: E0313 01:12:28.647487 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. No retries permitted until 2026-03-13 01:12:30.647478027 +0000 UTC m=+9.919157671 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : secret "serving-cert" not found Mar 13 01:12:29.002568 master-0 kubenswrapper[7599]: I0313 01:12:29.001459 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c9bbf75c-46ac-4556-b7ec-811807475615" path="/var/lib/kubelet/pods/c9bbf75c-46ac-4556-b7ec-811807475615/volumes" Mar 13 01:12:29.002568 master-0 kubenswrapper[7599]: I0313 01:12:29.002040 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:12:29.029439 master-0 kubenswrapper[7599]: I0313 01:12:29.029376 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mxp6\" (UniqueName: \"kubernetes.io/projected/5a124fde-6ed7-4846-8be7-9665ce7229d8-kube-api-access-8mxp6\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:29.100649 master-0 kubenswrapper[7599]: I0313 01:12:29.100573 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc8885c57-sznqx"] Mar 13 01:12:29.103627 master-0 kubenswrapper[7599]: E0313 01:12:29.101209 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" podUID="5a124fde-6ed7-4846-8be7-9665ce7229d8" Mar 13 01:12:29.179211 master-0 kubenswrapper[7599]: I0313 01:12:29.179001 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:29.179211 master-0 kubenswrapper[7599]: E0313 01:12:29.179176 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:29.179575 master-0 kubenswrapper[7599]: E0313 01:12:29.179284 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca podName:5a124fde-6ed7-4846-8be7-9665ce7229d8 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:30.179258827 +0000 UTC m=+9.450938221 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca") pod "controller-manager-7cc8885c57-sznqx" (UID: "5a124fde-6ed7-4846-8be7-9665ce7229d8") : configmap "client-ca" not found Mar 13 01:12:29.179906 master-0 kubenswrapper[7599]: I0313 01:12:29.179858 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:29.180122 master-0 kubenswrapper[7599]: E0313 01:12:29.180089 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:29.180122 master-0 kubenswrapper[7599]: E0313 01:12:29.180122 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert podName:5a124fde-6ed7-4846-8be7-9665ce7229d8 nodeName:}" failed. 
No retries permitted until 2026-03-13 01:12:30.180115346 +0000 UTC m=+9.451794740 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert") pod "controller-manager-7cc8885c57-sznqx" (UID: "5a124fde-6ed7-4846-8be7-9665ce7229d8") : secret "serving-cert" not found Mar 13 01:12:29.377780 master-0 kubenswrapper[7599]: I0313 01:12:29.377612 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:29.386835 master-0 kubenswrapper[7599]: I0313 01:12:29.386795 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:29.483839 master-0 kubenswrapper[7599]: I0313 01:12:29.483761 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mxp6\" (UniqueName: \"kubernetes.io/projected/5a124fde-6ed7-4846-8be7-9665ce7229d8-kube-api-access-8mxp6\") pod \"5a124fde-6ed7-4846-8be7-9665ce7229d8\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " Mar 13 01:12:29.483839 master-0 kubenswrapper[7599]: I0313 01:12:29.483836 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-proxy-ca-bundles\") pod \"5a124fde-6ed7-4846-8be7-9665ce7229d8\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " Mar 13 01:12:29.484221 master-0 kubenswrapper[7599]: I0313 01:12:29.483884 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-config\") pod \"5a124fde-6ed7-4846-8be7-9665ce7229d8\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " Mar 13 01:12:29.486316 master-0 kubenswrapper[7599]: I0313 01:12:29.486271 7599 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-config" (OuterVolumeSpecName: "config") pod "5a124fde-6ed7-4846-8be7-9665ce7229d8" (UID: "5a124fde-6ed7-4846-8be7-9665ce7229d8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:29.489225 master-0 kubenswrapper[7599]: I0313 01:12:29.486392 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "5a124fde-6ed7-4846-8be7-9665ce7229d8" (UID: "5a124fde-6ed7-4846-8be7-9665ce7229d8"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:29.492080 master-0 kubenswrapper[7599]: I0313 01:12:29.489942 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5a124fde-6ed7-4846-8be7-9665ce7229d8-kube-api-access-8mxp6" (OuterVolumeSpecName: "kube-api-access-8mxp6") pod "5a124fde-6ed7-4846-8be7-9665ce7229d8" (UID: "5a124fde-6ed7-4846-8be7-9665ce7229d8"). InnerVolumeSpecName "kube-api-access-8mxp6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:12:29.585492 master-0 kubenswrapper[7599]: I0313 01:12:29.585427 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mxp6\" (UniqueName: \"kubernetes.io/projected/5a124fde-6ed7-4846-8be7-9665ce7229d8-kube-api-access-8mxp6\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:29.585492 master-0 kubenswrapper[7599]: I0313 01:12:29.585468 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:29.585492 master-0 kubenswrapper[7599]: I0313 01:12:29.585480 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.687998 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.688048 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.688087 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.688112 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.688132 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.688155 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.688200 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.688236 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.688274 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.688297 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: I0313 01:12:29.688315 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:29.691059 master-0 
kubenswrapper[7599]: E0313 01:12:29.688435 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/olm-operator-serving-cert: secret "olm-operator-serving-cert" not found Mar 13 01:12:29.691059 master-0 kubenswrapper[7599]: E0313 01:12:29.688488 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert podName:31f19d97-50f9-4486-a8f9-df61ef2b0528 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:37.688472864 +0000 UTC m=+16.960152248 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert") pod "olm-operator-d64cfc9db-r4gzg" (UID: "31f19d97-50f9-4486-a8f9-df61ef2b0528") : secret "olm-operator-serving-cert" not found Mar 13 01:12:29.692339 master-0 kubenswrapper[7599]: E0313 01:12:29.691533 7599 secret.go:189] Couldn't get secret openshift-marketplace/marketplace-operator-metrics: secret "marketplace-operator-metrics" not found Mar 13 01:12:29.692339 master-0 kubenswrapper[7599]: E0313 01:12:29.691646 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics podName:8ad2a6d5-6edf-4840-89f9-47847c8dac05 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:37.69161781 +0000 UTC m=+16.963297394 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "marketplace-operator-metrics" (UniqueName: "kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics") pod "marketplace-operator-64bf9778cb-bx29h" (UID: "8ad2a6d5-6edf-4840-89f9-47847c8dac05") : secret "marketplace-operator-metrics" not found Mar 13 01:12:29.692339 master-0 kubenswrapper[7599]: E0313 01:12:29.691709 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/catalog-operator-serving-cert: secret "catalog-operator-serving-cert" not found Mar 13 01:12:29.692339 master-0 kubenswrapper[7599]: E0313 01:12:29.691736 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert podName:6ad2904e-ece9-4d72-8683-c3e691e07497 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:37.691725512 +0000 UTC m=+16.963405116 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "srv-cert" (UniqueName: "kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert") pod "catalog-operator-7d9c49f57b-4jttq" (UID: "6ad2904e-ece9-4d72-8683-c3e691e07497") : secret "catalog-operator-serving-cert" not found Mar 13 01:12:29.692339 master-0 kubenswrapper[7599]: E0313 01:12:29.692265 7599 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/package-server-manager-serving-cert: secret "package-server-manager-serving-cert" not found Mar 13 01:12:29.692339 master-0 kubenswrapper[7599]: E0313 01:12:29.692305 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert podName:53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:37.692294564 +0000 UTC m=+16.963973958 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "package-server-manager-serving-cert" (UniqueName: "kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert") pod "package-server-manager-854648ff6d-pj26h" (UID: "53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59") : secret "package-server-manager-serving-cert" not found Mar 13 01:12:29.692702 master-0 kubenswrapper[7599]: E0313 01:12:29.692376 7599 secret.go:189] Couldn't get secret openshift-monitoring/cluster-monitoring-operator-tls: secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:29.692702 master-0 kubenswrapper[7599]: E0313 01:12:29.692416 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls podName:46015913-c499-49b1-a9f6-a61c6e96b13f nodeName:}" failed. No retries permitted until 2026-03-13 01:12:37.692406717 +0000 UTC m=+16.964086111 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "cluster-monitoring-operator-tls" (UniqueName: "kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls") pod "cluster-monitoring-operator-674cbfbd9d-75jj7" (UID: "46015913-c499-49b1-a9f6-a61c6e96b13f") : secret "cluster-monitoring-operator-tls" not found Mar 13 01:12:29.696741 master-0 kubenswrapper[7599]: I0313 01:12:29.696584 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:29.698065 master-0 kubenswrapper[7599]: I0313 01:12:29.696976 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:29.699746 master-0 kubenswrapper[7599]: I0313 01:12:29.699719 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:29.700026 master-0 kubenswrapper[7599]: I0313 01:12:29.699957 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:29.701440 master-0 kubenswrapper[7599]: I0313 01:12:29.701403 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"cluster-version-operator-745944c6b7-dqdgs\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") " pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:29.702260 master-0 kubenswrapper[7599]: I0313 01:12:29.702080 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 
01:12:29.721722 master-0 kubenswrapper[7599]: I0313 01:12:29.721096 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:12:29.721722 master-0 kubenswrapper[7599]: I0313 01:12:29.721617 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:12:29.724294 master-0 kubenswrapper[7599]: I0313 01:12:29.722988 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:12:29.724697 master-0 kubenswrapper[7599]: I0313 01:12:29.724403 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:12:29.744080 master-0 kubenswrapper[7599]: I0313 01:12:29.744048 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:12:29.789580 master-0 kubenswrapper[7599]: I0313 01:12:29.789522 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:29.789779 master-0 kubenswrapper[7599]: I0313 01:12:29.789620 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:29.789830 master-0 kubenswrapper[7599]: E0313 01:12:29.789768 7599 secret.go:189] Couldn't get secret openshift-multus/multus-admission-controller-secret: secret "multus-admission-controller-secret" not found Mar 13 01:12:29.789874 master-0 kubenswrapper[7599]: E0313 01:12:29.789865 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs podName:161d2fa6-a541-427a-a3e9-3297102a26f5 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:37.789835484 +0000 UTC m=+17.061515038 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "webhook-certs" (UniqueName: "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs") pod "multus-admission-controller-8d675b596-ddtwn" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5") : secret "multus-admission-controller-secret" not found Mar 13 01:12:29.789930 master-0 kubenswrapper[7599]: E0313 01:12:29.789902 7599 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: secret "metrics-daemon-secret" not found Mar 13 01:12:29.789982 master-0 kubenswrapper[7599]: E0313 01:12:29.789972 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs podName:9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d nodeName:}" failed. No retries permitted until 2026-03-13 01:12:37.789947918 +0000 UTC m=+17.061627502 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs") pod "network-metrics-daemon-9hwz9" (UID: "9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d") : secret "metrics-daemon-secret" not found Mar 13 01:12:30.196104 master-0 kubenswrapper[7599]: I0313 01:12:30.195327 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:30.196104 master-0 kubenswrapper[7599]: I0313 01:12:30.195834 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca\") pod \"controller-manager-7cc8885c57-sznqx\" (UID: \"5a124fde-6ed7-4846-8be7-9665ce7229d8\") " 
pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:30.196104 master-0 kubenswrapper[7599]: E0313 01:12:30.195584 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:30.196104 master-0 kubenswrapper[7599]: E0313 01:12:30.195986 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:30.196104 master-0 kubenswrapper[7599]: E0313 01:12:30.196004 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert podName:5a124fde-6ed7-4846-8be7-9665ce7229d8 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:32.195968354 +0000 UTC m=+11.467647788 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert") pod "controller-manager-7cc8885c57-sznqx" (UID: "5a124fde-6ed7-4846-8be7-9665ce7229d8") : secret "serving-cert" not found Mar 13 01:12:30.196104 master-0 kubenswrapper[7599]: E0313 01:12:30.196053 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca podName:5a124fde-6ed7-4846-8be7-9665ce7229d8 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:32.196033716 +0000 UTC m=+11.467713110 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca") pod "controller-manager-7cc8885c57-sznqx" (UID: "5a124fde-6ed7-4846-8be7-9665ce7229d8") : configmap "client-ca" not found Mar 13 01:12:30.380639 master-0 kubenswrapper[7599]: I0313 01:12:30.380555 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7cc8885c57-sznqx" Mar 13 01:12:30.433332 master-0 kubenswrapper[7599]: I0313 01:12:30.433255 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-575fbbbb98-mc6fk"] Mar 13 01:12:30.434120 master-0 kubenswrapper[7599]: I0313 01:12:30.434086 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.438164 master-0 kubenswrapper[7599]: I0313 01:12:30.434490 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7cc8885c57-sznqx"] Mar 13 01:12:30.446762 master-0 kubenswrapper[7599]: I0313 01:12:30.444473 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 01:12:30.446762 master-0 kubenswrapper[7599]: I0313 01:12:30.444880 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 01:12:30.448615 master-0 kubenswrapper[7599]: I0313 01:12:30.447623 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 01:12:30.450473 master-0 kubenswrapper[7599]: I0313 01:12:30.450430 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-575fbbbb98-mc6fk"] Mar 13 01:12:30.450473 master-0 kubenswrapper[7599]: I0313 01:12:30.450473 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7cc8885c57-sznqx"] Mar 13 01:12:30.450719 master-0 kubenswrapper[7599]: I0313 01:12:30.450678 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 01:12:30.450804 master-0 kubenswrapper[7599]: I0313 01:12:30.450767 7599 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 01:12:30.450910 master-0 kubenswrapper[7599]: I0313 01:12:30.450883 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 01:12:30.608448 master-0 kubenswrapper[7599]: I0313 01:12:30.608392 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.608448 master-0 kubenswrapper[7599]: I0313 01:12:30.608452 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-proxy-ca-bundles\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.608722 master-0 kubenswrapper[7599]: I0313 01:12:30.608571 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pws4d\" (UniqueName: \"kubernetes.io/projected/ab4408ab-5c90-46c2-9483-27974e568361-kube-api-access-pws4d\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.608722 master-0 kubenswrapper[7599]: I0313 01:12:30.608663 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-config\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: 
\"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.608839 master-0 kubenswrapper[7599]: I0313 01:12:30.608776 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.608889 master-0 kubenswrapper[7599]: I0313 01:12:30.608860 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a124fde-6ed7-4846-8be7-9665ce7229d8-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:30.608889 master-0 kubenswrapper[7599]: I0313 01:12:30.608873 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5a124fde-6ed7-4846-8be7-9665ce7229d8-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:30.709694 master-0 kubenswrapper[7599]: I0313 01:12:30.709565 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-proxy-ca-bundles\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.709694 master-0 kubenswrapper[7599]: I0313 01:12:30.709639 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 
01:12:30.709940 master-0 kubenswrapper[7599]: I0313 01:12:30.709856 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pws4d\" (UniqueName: \"kubernetes.io/projected/ab4408ab-5c90-46c2-9483-27974e568361-kube-api-access-pws4d\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.710056 master-0 kubenswrapper[7599]: I0313 01:12:30.709936 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-config\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.710056 master-0 kubenswrapper[7599]: E0313 01:12:30.709965 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:30.710056 master-0 kubenswrapper[7599]: I0313 01:12:30.710006 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.710056 master-0 kubenswrapper[7599]: E0313 01:12:30.710045 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. No retries permitted until 2026-03-13 01:12:34.710022288 +0000 UTC m=+13.981701882 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : configmap "client-ca" not found Mar 13 01:12:30.710224 master-0 kubenswrapper[7599]: E0313 01:12:30.710077 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:30.710224 master-0 kubenswrapper[7599]: I0313 01:12:30.710106 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:30.710224 master-0 kubenswrapper[7599]: E0313 01:12:30.710139 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca podName:ab4408ab-5c90-46c2-9483-27974e568361 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:31.210121301 +0000 UTC m=+10.481800695 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca") pod "controller-manager-575fbbbb98-mc6fk" (UID: "ab4408ab-5c90-46c2-9483-27974e568361") : configmap "client-ca" not found Mar 13 01:12:30.710224 master-0 kubenswrapper[7599]: I0313 01:12:30.710157 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.710224 master-0 kubenswrapper[7599]: E0313 01:12:30.710198 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:30.710224 master-0 kubenswrapper[7599]: E0313 01:12:30.710230 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. No retries permitted until 2026-03-13 01:12:34.710223554 +0000 UTC m=+13.981902948 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : secret "serving-cert" not found Mar 13 01:12:30.710477 master-0 kubenswrapper[7599]: E0313 01:12:30.710442 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:30.710569 master-0 kubenswrapper[7599]: E0313 01:12:30.710483 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert podName:ab4408ab-5c90-46c2-9483-27974e568361 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:31.210473362 +0000 UTC m=+10.482152756 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert") pod "controller-manager-575fbbbb98-mc6fk" (UID: "ab4408ab-5c90-46c2-9483-27974e568361") : secret "serving-cert" not found Mar 13 01:12:30.711449 master-0 kubenswrapper[7599]: I0313 01:12:30.711412 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-config\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.711620 master-0 kubenswrapper[7599]: I0313 01:12:30.711596 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-proxy-ca-bundles\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:30.728217 master-0 
kubenswrapper[7599]: I0313 01:12:30.728174 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pws4d\" (UniqueName: \"kubernetes.io/projected/ab4408ab-5c90-46c2-9483-27974e568361-kube-api-access-pws4d\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:31.010625 master-0 kubenswrapper[7599]: I0313 01:12:31.009258 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5a124fde-6ed7-4846-8be7-9665ce7229d8" path="/var/lib/kubelet/pods/5a124fde-6ed7-4846-8be7-9665ce7229d8/volumes" Mar 13 01:12:31.123988 master-0 kubenswrapper[7599]: I0313 01:12:31.123205 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"] Mar 13 01:12:31.143767 master-0 kubenswrapper[7599]: I0313 01:12:31.143695 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"] Mar 13 01:12:31.163829 master-0 kubenswrapper[7599]: W0313 01:12:31.160742 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod75a53c09_210a_4346_99b0_a632b9e0a3c9.slice/crio-e4bd5af8e1a96f925e1b64f7902f036f84c366c1ef01152f845644c1aa6a1b22 WatchSource:0}: Error finding container e4bd5af8e1a96f925e1b64f7902f036f84c366c1ef01152f845644c1aa6a1b22: Status 404 returned error can't find the container with id e4bd5af8e1a96f925e1b64f7902f036f84c366c1ef01152f845644c1aa6a1b22 Mar 13 01:12:31.173451 master-0 kubenswrapper[7599]: I0313 01:12:31.173397 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"] Mar 13 01:12:31.228333 master-0 kubenswrapper[7599]: I0313 01:12:31.225202 7599 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:31.228333 master-0 kubenswrapper[7599]: I0313 01:12:31.225320 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:31.228333 master-0 kubenswrapper[7599]: E0313 01:12:31.225481 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:31.228333 master-0 kubenswrapper[7599]: E0313 01:12:31.225610 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca podName:ab4408ab-5c90-46c2-9483-27974e568361 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:32.22558896 +0000 UTC m=+11.497268354 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca") pod "controller-manager-575fbbbb98-mc6fk" (UID: "ab4408ab-5c90-46c2-9483-27974e568361") : configmap "client-ca" not found Mar 13 01:12:31.228333 master-0 kubenswrapper[7599]: E0313 01:12:31.225701 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:31.228333 master-0 kubenswrapper[7599]: E0313 01:12:31.225724 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert podName:ab4408ab-5c90-46c2-9483-27974e568361 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:32.225718214 +0000 UTC m=+11.497397608 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert") pod "controller-manager-575fbbbb98-mc6fk" (UID: "ab4408ab-5c90-46c2-9483-27974e568361") : secret "serving-cert" not found Mar 13 01:12:31.386536 master-0 kubenswrapper[7599]: I0313 01:12:31.386479 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-589895fbb7-wb6qq"] Mar 13 01:12:31.392009 master-0 kubenswrapper[7599]: I0313 01:12:31.391670 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" event={"ID":"75a53c09-210a-4346-99b0-a632b9e0a3c9","Type":"ContainerStarted","Data":"e4bd5af8e1a96f925e1b64f7902f036f84c366c1ef01152f845644c1aa6a1b22"} Mar 13 01:12:31.394189 master-0 kubenswrapper[7599]: I0313 01:12:31.394143 7599 generic.go:334] "Generic (PLEG): container finished" podID="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" containerID="3b5d590cab289e687af0089813cf69faee5c388307bbafba8b29486da0d45d2a" exitCode=0 Mar 13 01:12:31.394258 master-0 kubenswrapper[7599]: I0313 01:12:31.394222 7599 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" event={"ID":"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a","Type":"ContainerDied","Data":"3b5d590cab289e687af0089813cf69faee5c388307bbafba8b29486da0d45d2a"} Mar 13 01:12:31.396331 master-0 kubenswrapper[7599]: I0313 01:12:31.396283 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" event={"ID":"2d368174-c659-444e-ba28-8fa267c0eda6","Type":"ContainerStarted","Data":"b54155a5db31eb0df3f308a670d9f6fabe70860c769343bf09370d04c49698f7"} Mar 13 01:12:31.402529 master-0 kubenswrapper[7599]: W0313 01:12:31.398799 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d874a21_43aa_4d81_b904_853fb3da5a94.slice/crio-3be9d647691aac847285be1df15dfc7365f7b948dc0fd04d51bc4a610b82da33 WatchSource:0}: Error finding container 3be9d647691aac847285be1df15dfc7365f7b948dc0fd04d51bc4a610b82da33: Status 404 returned error can't find the container with id 3be9d647691aac847285be1df15dfc7365f7b948dc0fd04d51bc4a610b82da33 Mar 13 01:12:31.402529 master-0 kubenswrapper[7599]: I0313 01:12:31.399926 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerStarted","Data":"544c375d0985569800e6f6387597c6bbdd7b9967f0bc5e80927a60f7a9628d80"} Mar 13 01:12:31.402529 master-0 kubenswrapper[7599]: I0313 01:12:31.400383 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:31.402529 master-0 kubenswrapper[7599]: I0313 01:12:31.401761 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" 
event={"ID":"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7","Type":"ContainerStarted","Data":"b055cbc200ec047aacb638d82e675e244c203df858dcd01394edc1e4bc014d9f"} Mar 13 01:12:31.404242 master-0 kubenswrapper[7599]: I0313 01:12:31.404209 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" event={"ID":"91fc568a-61ad-400e-a54e-21d62e51bb17","Type":"ContainerStarted","Data":"5cba1e5f698e98df3c15a1fd7c6d0586c623f3939d642ba858d361854e19b48c"} Mar 13 01:12:31.411729 master-0 kubenswrapper[7599]: I0313 01:12:31.411669 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" event={"ID":"f771149b-9d62-408e-be6f-72f575b1ec42","Type":"ContainerStarted","Data":"1120ea0925f41e299ab63750b1fba7b5a9635d8fc9eb2cc2b78a1e7ad9b55397"} Mar 13 01:12:31.411729 master-0 kubenswrapper[7599]: I0313 01:12:31.411712 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" event={"ID":"f771149b-9d62-408e-be6f-72f575b1ec42","Type":"ContainerStarted","Data":"a0834f83f02dbc068db339c4d68423af7ea4a3c5dd8191580f3f372ded166dfc"} Mar 13 01:12:31.418621 master-0 kubenswrapper[7599]: I0313 01:12:31.414036 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerStarted","Data":"743c555e1cf0c98c73695ed678affcb2226d9582a12dd77e2de535512f78c66d"} Mar 13 01:12:31.442280 master-0 kubenswrapper[7599]: I0313 01:12:31.442198 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" podStartSLOduration=1.8386234909999999 podStartE2EDuration="6.442174154s" podCreationTimestamp="2026-03-13 01:12:25 +0000 UTC" firstStartedPulling="2026-03-13 01:12:26.268493138 +0000 UTC 
m=+5.540172532" lastFinishedPulling="2026-03-13 01:12:30.872043801 +0000 UTC m=+10.143723195" observedRunningTime="2026-03-13 01:12:31.441737339 +0000 UTC m=+10.713416753" watchObservedRunningTime="2026-03-13 01:12:31.442174154 +0000 UTC m=+10.713853548" Mar 13 01:12:31.475335 master-0 kubenswrapper[7599]: I0313 01:12:31.470641 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" podStartSLOduration=2.261474067 podStartE2EDuration="6.470156056s" podCreationTimestamp="2026-03-13 01:12:25 +0000 UTC" firstStartedPulling="2026-03-13 01:12:26.670552662 +0000 UTC m=+5.942232056" lastFinishedPulling="2026-03-13 01:12:30.879234651 +0000 UTC m=+10.150914045" observedRunningTime="2026-03-13 01:12:31.469002699 +0000 UTC m=+10.740682123" watchObservedRunningTime="2026-03-13 01:12:31.470156056 +0000 UTC m=+10.741835460" Mar 13 01:12:32.264804 master-0 kubenswrapper[7599]: I0313 01:12:32.264678 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:32.265035 master-0 kubenswrapper[7599]: I0313 01:12:32.264819 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:32.265035 master-0 kubenswrapper[7599]: E0313 01:12:32.264956 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:32.265035 master-0 
kubenswrapper[7599]: E0313 01:12:32.264993 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:32.265161 master-0 kubenswrapper[7599]: E0313 01:12:32.265053 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert podName:ab4408ab-5c90-46c2-9483-27974e568361 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:34.26503037 +0000 UTC m=+13.536709764 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert") pod "controller-manager-575fbbbb98-mc6fk" (UID: "ab4408ab-5c90-46c2-9483-27974e568361") : secret "serving-cert" not found Mar 13 01:12:32.265161 master-0 kubenswrapper[7599]: E0313 01:12:32.265072 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca podName:ab4408ab-5c90-46c2-9483-27974e568361 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:34.265064461 +0000 UTC m=+13.536743855 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca") pod "controller-manager-575fbbbb98-mc6fk" (UID: "ab4408ab-5c90-46c2-9483-27974e568361") : configmap "client-ca" not found Mar 13 01:12:32.418572 master-0 kubenswrapper[7599]: I0313 01:12:32.418495 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" event={"ID":"7d874a21-43aa-4d81-b904-853fb3da5a94","Type":"ContainerStarted","Data":"3be9d647691aac847285be1df15dfc7365f7b948dc0fd04d51bc4a610b82da33"} Mar 13 01:12:32.434090 master-0 kubenswrapper[7599]: I0313 01:12:32.419738 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/0.log" Mar 13 01:12:32.434090 master-0 kubenswrapper[7599]: I0313 01:12:32.419996 7599 generic.go:334] "Generic (PLEG): container finished" podID="6fd82994-f4d4-49e9-8742-07e206322e76" containerID="544c375d0985569800e6f6387597c6bbdd7b9967f0bc5e80927a60f7a9628d80" exitCode=255 Mar 13 01:12:32.434090 master-0 kubenswrapper[7599]: I0313 01:12:32.420767 7599 scope.go:117] "RemoveContainer" containerID="544c375d0985569800e6f6387597c6bbdd7b9967f0bc5e80927a60f7a9628d80" Mar 13 01:12:32.434090 master-0 kubenswrapper[7599]: I0313 01:12:32.421031 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerDied","Data":"544c375d0985569800e6f6387597c6bbdd7b9967f0bc5e80927a60f7a9628d80"} Mar 13 01:12:32.455918 master-0 kubenswrapper[7599]: I0313 01:12:32.455872 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:33.428184 master-0 kubenswrapper[7599]: I0313 01:12:33.427884 
7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/0.log" Mar 13 01:12:33.432467 master-0 kubenswrapper[7599]: I0313 01:12:33.431500 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerStarted","Data":"c6f2c7ce1ebd48d89e8b89aa6f0c61474cf42c8cd887993b37c623a2d414e5fb"} Mar 13 01:12:33.432467 master-0 kubenswrapper[7599]: I0313 01:12:33.432300 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:34.290620 master-0 kubenswrapper[7599]: I0313 01:12:34.290541 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:34.290988 master-0 kubenswrapper[7599]: E0313 01:12:34.290755 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:34.290988 master-0 kubenswrapper[7599]: E0313 01:12:34.290874 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert podName:ab4408ab-5c90-46c2-9483-27974e568361 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:38.290847209 +0000 UTC m=+17.562526713 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert") pod "controller-manager-575fbbbb98-mc6fk" (UID: "ab4408ab-5c90-46c2-9483-27974e568361") : secret "serving-cert" not found Mar 13 01:12:34.291333 master-0 kubenswrapper[7599]: I0313 01:12:34.291022 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:34.291333 master-0 kubenswrapper[7599]: E0313 01:12:34.291216 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:34.291333 master-0 kubenswrapper[7599]: E0313 01:12:34.291306 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca podName:ab4408ab-5c90-46c2-9483-27974e568361 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:38.291281162 +0000 UTC m=+17.562960556 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca") pod "controller-manager-575fbbbb98-mc6fk" (UID: "ab4408ab-5c90-46c2-9483-27974e568361") : configmap "client-ca" not found Mar 13 01:12:34.797102 master-0 kubenswrapper[7599]: I0313 01:12:34.797023 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:34.797828 master-0 kubenswrapper[7599]: E0313 01:12:34.797180 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:34.797828 master-0 kubenswrapper[7599]: I0313 01:12:34.797253 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:34.797828 master-0 kubenswrapper[7599]: E0313 01:12:34.797284 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. No retries permitted until 2026-03-13 01:12:42.79725809 +0000 UTC m=+22.068937484 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : configmap "client-ca" not found Mar 13 01:12:34.797828 master-0 kubenswrapper[7599]: E0313 01:12:34.797431 7599 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:34.797828 master-0 kubenswrapper[7599]: E0313 01:12:34.797555 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. No retries permitted until 2026-03-13 01:12:42.797532298 +0000 UTC m=+22.069211692 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : secret "serving-cert" not found Mar 13 01:12:35.348798 master-0 kubenswrapper[7599]: I0313 01:12:35.346334 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-86544774bc-rdwxl"] Mar 13 01:12:35.348798 master-0 kubenswrapper[7599]: I0313 01:12:35.347124 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.352369 master-0 kubenswrapper[7599]: I0313 01:12:35.351369 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-0" Mar 13 01:12:35.352369 master-0 kubenswrapper[7599]: I0313 01:12:35.351494 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-0" Mar 13 01:12:35.352369 master-0 kubenswrapper[7599]: I0313 01:12:35.351775 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 01:12:35.352369 master-0 kubenswrapper[7599]: I0313 01:12:35.351926 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 01:12:35.352369 master-0 kubenswrapper[7599]: I0313 01:12:35.351983 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 01:12:35.352369 master-0 kubenswrapper[7599]: I0313 01:12:35.351944 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 13 01:12:35.352369 master-0 kubenswrapper[7599]: I0313 01:12:35.352136 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 01:12:35.356584 master-0 kubenswrapper[7599]: I0313 01:12:35.355126 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 01:12:35.356584 master-0 kubenswrapper[7599]: I0313 01:12:35.355648 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 13 01:12:35.358008 master-0 kubenswrapper[7599]: I0313 01:12:35.357881 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-86544774bc-rdwxl"] Mar 13 01:12:35.358853 master-0 kubenswrapper[7599]: I0313 01:12:35.358804 7599 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 13 01:12:35.453384 master-0 kubenswrapper[7599]: I0313 01:12:35.453321 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:12:35.507015 master-0 kubenswrapper[7599]: I0313 01:12:35.506948 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit-dir\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.507015 master-0 kubenswrapper[7599]: I0313 01:12:35.507009 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-config\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.507015 master-0 kubenswrapper[7599]: I0313 01:12:35.507041 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-encryption-config\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.507494 master-0 kubenswrapper[7599]: I0313 01:12:35.507071 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-image-import-ca\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " 
pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.507494 master-0 kubenswrapper[7599]: I0313 01:12:35.507116 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.507494 master-0 kubenswrapper[7599]: I0313 01:12:35.507226 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-trusted-ca-bundle\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.507494 master-0 kubenswrapper[7599]: I0313 01:12:35.507348 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-serving-ca\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.507494 master-0 kubenswrapper[7599]: I0313 01:12:35.507399 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.507494 master-0 kubenswrapper[7599]: I0313 01:12:35.507422 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-client\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.507494 master-0 kubenswrapper[7599]: I0313 01:12:35.507440 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q4gk4\" (UniqueName: \"kubernetes.io/projected/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-kube-api-access-q4gk4\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.507718 master-0 kubenswrapper[7599]: I0313 01:12:35.507564 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-node-pullsecrets\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.608960 master-0 kubenswrapper[7599]: I0313 01:12:35.608746 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit-dir\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.608960 master-0 kubenswrapper[7599]: I0313 01:12:35.608912 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit-dir\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.609299 master-0 kubenswrapper[7599]: I0313 01:12:35.609023 7599 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-config\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.609299 master-0 kubenswrapper[7599]: I0313 01:12:35.609070 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-encryption-config\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.609299 master-0 kubenswrapper[7599]: I0313 01:12:35.609112 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-image-import-ca\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.609299 master-0 kubenswrapper[7599]: I0313 01:12:35.609140 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.609299 master-0 kubenswrapper[7599]: I0313 01:12:35.609169 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-trusted-ca-bundle\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.609707 master-0 kubenswrapper[7599]: I0313 01:12:35.609673 7599 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-serving-ca\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.610838 master-0 kubenswrapper[7599]: E0313 01:12:35.610780 7599 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 13 01:12:35.611076 master-0 kubenswrapper[7599]: E0313 01:12:35.611041 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert podName:cb3c0d3e-8143-4cfe-b438-6b02112f7cc3 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:36.111008676 +0000 UTC m=+15.382688070 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert") pod "apiserver-86544774bc-rdwxl" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3") : secret "serving-cert" not found Mar 13 01:12:35.612058 master-0 kubenswrapper[7599]: I0313 01:12:35.611970 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-image-import-ca\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.612195 master-0 kubenswrapper[7599]: I0313 01:12:35.612089 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.612195 master-0 kubenswrapper[7599]: 
I0313 01:12:35.612115 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-client\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.612195 master-0 kubenswrapper[7599]: I0313 01:12:35.612137 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q4gk4\" (UniqueName: \"kubernetes.io/projected/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-kube-api-access-q4gk4\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.612195 master-0 kubenswrapper[7599]: I0313 01:12:35.612112 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-config\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.612393 master-0 kubenswrapper[7599]: E0313 01:12:35.612227 7599 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 01:12:35.612393 master-0 kubenswrapper[7599]: I0313 01:12:35.612271 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-node-pullsecrets\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.612393 master-0 kubenswrapper[7599]: E0313 01:12:35.612293 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit podName:cb3c0d3e-8143-4cfe-b438-6b02112f7cc3 
nodeName:}" failed. No retries permitted until 2026-03-13 01:12:36.112273287 +0000 UTC m=+15.383952701 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit") pod "apiserver-86544774bc-rdwxl" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3") : configmap "audit-0" not found Mar 13 01:12:35.612393 master-0 kubenswrapper[7599]: I0313 01:12:35.612368 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-node-pullsecrets\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.612393 master-0 kubenswrapper[7599]: I0313 01:12:35.612378 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-trusted-ca-bundle\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.613018 master-0 kubenswrapper[7599]: I0313 01:12:35.612972 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-serving-ca\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.618932 master-0 kubenswrapper[7599]: I0313 01:12:35.618854 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-encryption-config\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 
01:12:35.624797 master-0 kubenswrapper[7599]: I0313 01:12:35.624673 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-client\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:35.647667 master-0 kubenswrapper[7599]: I0313 01:12:35.647604 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q4gk4\" (UniqueName: \"kubernetes.io/projected/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-kube-api-access-q4gk4\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:36.137502 master-0 kubenswrapper[7599]: I0313 01:12:36.137423 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:36.138167 master-0 kubenswrapper[7599]: E0313 01:12:36.137623 7599 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 13 01:12:36.138167 master-0 kubenswrapper[7599]: E0313 01:12:36.137708 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert podName:cb3c0d3e-8143-4cfe-b438-6b02112f7cc3 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:37.137687063 +0000 UTC m=+16.409366477 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert") pod "apiserver-86544774bc-rdwxl" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3") : secret "serving-cert" not found Mar 13 01:12:36.138271 master-0 kubenswrapper[7599]: I0313 01:12:36.138171 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:36.138317 master-0 kubenswrapper[7599]: E0313 01:12:36.138281 7599 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 01:12:36.138317 master-0 kubenswrapper[7599]: E0313 01:12:36.138314 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit podName:cb3c0d3e-8143-4cfe-b438-6b02112f7cc3 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:37.138304153 +0000 UTC m=+16.409983567 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit") pod "apiserver-86544774bc-rdwxl" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3") : configmap "audit-0" not found Mar 13 01:12:37.167477 master-0 kubenswrapper[7599]: I0313 01:12:37.167402 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:37.168023 master-0 kubenswrapper[7599]: I0313 01:12:37.167571 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:37.168023 master-0 kubenswrapper[7599]: E0313 01:12:37.167723 7599 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 01:12:37.168023 master-0 kubenswrapper[7599]: E0313 01:12:37.167818 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit podName:cb3c0d3e-8143-4cfe-b438-6b02112f7cc3 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:39.167794635 +0000 UTC m=+18.439474039 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit") pod "apiserver-86544774bc-rdwxl" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3") : configmap "audit-0" not found Mar 13 01:12:37.168417 master-0 kubenswrapper[7599]: E0313 01:12:37.168380 7599 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 13 01:12:37.168474 master-0 kubenswrapper[7599]: E0313 01:12:37.168431 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert podName:cb3c0d3e-8143-4cfe-b438-6b02112f7cc3 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:39.168419145 +0000 UTC m=+18.440098549 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert") pod "apiserver-86544774bc-rdwxl" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3") : secret "serving-cert" not found Mar 13 01:12:37.780797 master-0 kubenswrapper[7599]: I0313 01:12:37.777718 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:37.780797 master-0 kubenswrapper[7599]: I0313 01:12:37.778306 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " 
pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:37.780797 master-0 kubenswrapper[7599]: I0313 01:12:37.778581 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:37.780797 master-0 kubenswrapper[7599]: I0313 01:12:37.778674 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:37.780797 master-0 kubenswrapper[7599]: I0313 01:12:37.778724 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:37.785396 master-0 kubenswrapper[7599]: I0313 01:12:37.784290 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:37.789445 master-0 kubenswrapper[7599]: I0313 01:12:37.789410 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:37.789445 master-0 kubenswrapper[7599]: I0313 01:12:37.789434 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:37.789632 master-0 kubenswrapper[7599]: I0313 01:12:37.789578 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:37.792972 master-0 kubenswrapper[7599]: I0313 01:12:37.792929 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:37.831265 master-0 kubenswrapper[7599]: I0313 01:12:37.827709 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:12:37.837159 master-0 kubenswrapper[7599]: I0313 01:12:37.837113 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:12:37.843861 master-0 kubenswrapper[7599]: I0313 01:12:37.843829 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:12:37.844045 master-0 kubenswrapper[7599]: I0313 01:12:37.843990 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:12:37.844371 master-0 kubenswrapper[7599]: I0313 01:12:37.844328 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:12:37.880193 master-0 kubenswrapper[7599]: I0313 01:12:37.880133 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:37.880429 master-0 kubenswrapper[7599]: I0313 01:12:37.880269 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:37.884977 master-0 kubenswrapper[7599]: I0313 01:12:37.884921 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " 
pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:37.887081 master-0 kubenswrapper[7599]: I0313 01:12:37.887027 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:38.147141 master-0 kubenswrapper[7599]: I0313 01:12:38.146995 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:12:38.147325 master-0 kubenswrapper[7599]: I0313 01:12:38.146995 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:12:38.386381 master-0 kubenswrapper[7599]: I0313 01:12:38.386099 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:38.386381 master-0 kubenswrapper[7599]: E0313 01:12:38.386344 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:38.386925 master-0 kubenswrapper[7599]: I0313 01:12:38.386381 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert\") pod \"controller-manager-575fbbbb98-mc6fk\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:38.386925 master-0 kubenswrapper[7599]: 
E0313 01:12:38.386480 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca podName:ab4408ab-5c90-46c2-9483-27974e568361 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:46.386447767 +0000 UTC m=+25.658127191 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca") pod "controller-manager-575fbbbb98-mc6fk" (UID: "ab4408ab-5c90-46c2-9483-27974e568361") : configmap "client-ca" not found Mar 13 01:12:38.386925 master-0 kubenswrapper[7599]: E0313 01:12:38.386486 7599 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: secret "serving-cert" not found Mar 13 01:12:38.386925 master-0 kubenswrapper[7599]: E0313 01:12:38.386654 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert podName:ab4408ab-5c90-46c2-9483-27974e568361 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:46.386623373 +0000 UTC m=+25.658302947 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert") pod "controller-manager-575fbbbb98-mc6fk" (UID: "ab4408ab-5c90-46c2-9483-27974e568361") : secret "serving-cert" not found Mar 13 01:12:39.196337 master-0 kubenswrapper[7599]: I0313 01:12:39.196237 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:39.196682 master-0 kubenswrapper[7599]: E0313 01:12:39.196448 7599 secret.go:189] Couldn't get secret openshift-apiserver/serving-cert: secret "serving-cert" not found Mar 13 01:12:39.196682 master-0 kubenswrapper[7599]: E0313 01:12:39.196573 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert podName:cb3c0d3e-8143-4cfe-b438-6b02112f7cc3 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:43.196546778 +0000 UTC m=+22.468226352 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert") pod "apiserver-86544774bc-rdwxl" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3") : secret "serving-cert" not found Mar 13 01:12:39.196986 master-0 kubenswrapper[7599]: I0313 01:12:39.196946 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:39.197142 master-0 kubenswrapper[7599]: E0313 01:12:39.197046 7599 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 01:12:39.197318 master-0 kubenswrapper[7599]: E0313 01:12:39.197297 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit podName:cb3c0d3e-8143-4cfe-b438-6b02112f7cc3 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:43.197271781 +0000 UTC m=+22.468951205 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit") pod "apiserver-86544774bc-rdwxl" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3") : configmap "audit-0" not found Mar 13 01:12:43.305983 master-0 kubenswrapper[7599]: I0313 01:12:43.257828 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:43.305983 master-0 kubenswrapper[7599]: I0313 01:12:43.257925 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:43.305983 master-0 kubenswrapper[7599]: I0313 01:12:43.257977 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:43.305983 master-0 kubenswrapper[7599]: I0313 01:12:43.258000 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:43.307335 master-0 kubenswrapper[7599]: E0313 01:12:43.306766 
7599 configmap.go:193] Couldn't get configMap openshift-apiserver/audit-0: configmap "audit-0" not found Mar 13 01:12:43.307420 master-0 kubenswrapper[7599]: E0313 01:12:43.307395 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit podName:cb3c0d3e-8143-4cfe-b438-6b02112f7cc3 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:51.307356709 +0000 UTC m=+30.579036103 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "audit" (UniqueName: "kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit") pod "apiserver-86544774bc-rdwxl" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3") : configmap "audit-0" not found Mar 13 01:12:43.308207 master-0 kubenswrapper[7599]: E0313 01:12:43.308168 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:43.308287 master-0 kubenswrapper[7599]: E0313 01:12:43.308213 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca podName:50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c nodeName:}" failed. No retries permitted until 2026-03-13 01:12:59.308202916 +0000 UTC m=+38.579882310 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca") pod "route-controller-manager-5cbd8bb87d-t6wm9" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c") : configmap "client-ca" not found Mar 13 01:12:43.308429 master-0 kubenswrapper[7599]: I0313 01:12:43.308385 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert\") pod \"apiserver-86544774bc-rdwxl\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:43.319243 master-0 kubenswrapper[7599]: I0313 01:12:43.318921 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert\") pod \"route-controller-manager-5cbd8bb87d-t6wm9\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:43.911980 master-0 kubenswrapper[7599]: I0313 01:12:43.911666 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-86544774bc-rdwxl"] Mar 13 01:12:43.912247 master-0 kubenswrapper[7599]: E0313 01:12:43.912176 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[audit], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-apiserver/apiserver-86544774bc-rdwxl" podUID="cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" Mar 13 01:12:43.935697 master-0 kubenswrapper[7599]: I0313 01:12:43.935653 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 01:12:43.936486 master-0 kubenswrapper[7599]: I0313 01:12:43.936468 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:43.943256 master-0 kubenswrapper[7599]: I0313 01:12:43.943185 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 01:12:43.963312 master-0 kubenswrapper[7599]: I0313 01:12:43.960973 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 01:12:43.969423 master-0 kubenswrapper[7599]: I0313 01:12:43.969318 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:43.969423 master-0 kubenswrapper[7599]: I0313 01:12:43.969411 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-var-lock\") pod \"installer-1-master-0\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:43.969637 master-0 kubenswrapper[7599]: I0313 01:12:43.969499 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:44.070275 master-0 kubenswrapper[7599]: I0313 01:12:44.070186 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kube-api-access\") pod \"installer-1-master-0\" (UID: 
\"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:44.070611 master-0 kubenswrapper[7599]: I0313 01:12:44.070548 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-var-lock\") pod \"installer-1-master-0\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:44.070908 master-0 kubenswrapper[7599]: I0313 01:12:44.070760 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-var-lock\") pod \"installer-1-master-0\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:44.071205 master-0 kubenswrapper[7599]: I0313 01:12:44.071110 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:44.071316 master-0 kubenswrapper[7599]: I0313 01:12:44.071283 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:44.113751 master-0 kubenswrapper[7599]: I0313 01:12:44.113639 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kube-api-access\") pod \"installer-1-master-0\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " 
pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:44.269671 master-0 kubenswrapper[7599]: I0313 01:12:44.268329 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:12:44.390803 master-0 kubenswrapper[7599]: I0313 01:12:44.389006 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-575fbbbb98-mc6fk"] Mar 13 01:12:44.394764 master-0 kubenswrapper[7599]: E0313 01:12:44.394669 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca serving-cert], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" podUID="ab4408ab-5c90-46c2-9483-27974e568361" Mar 13 01:12:44.453948 master-0 kubenswrapper[7599]: I0313 01:12:44.453900 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9"] Mar 13 01:12:44.454553 master-0 kubenswrapper[7599]: E0313 01:12:44.454524 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" podUID="50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c" Mar 13 01:12:44.537746 master-0 kubenswrapper[7599]: I0313 01:12:44.537136 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" event={"ID":"75a53c09-210a-4346-99b0-a632b9e0a3c9","Type":"ContainerStarted","Data":"951aa4d6803ad0268be9d58f3b51ebac5555d4f85866ee29a2837692062094ee"} Mar 13 01:12:44.538983 master-0 kubenswrapper[7599]: I0313 01:12:44.538834 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" 
event={"ID":"2d368174-c659-444e-ba28-8fa267c0eda6","Type":"ContainerStarted","Data":"6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58"} Mar 13 01:12:44.538983 master-0 kubenswrapper[7599]: I0313 01:12:44.538916 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:44.539589 master-0 kubenswrapper[7599]: I0313 01:12:44.539541 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:44.541791 master-0 kubenswrapper[7599]: I0313 01:12:44.540403 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:44.547855 master-0 kubenswrapper[7599]: I0313 01:12:44.547482 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:44.552361 master-0 kubenswrapper[7599]: I0313 01:12:44.551371 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:44.554659 master-0 kubenswrapper[7599]: I0313 01:12:44.554631 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:44.592379 master-0 kubenswrapper[7599]: I0313 01:12:44.589927 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddtwn"] Mar 13 01:12:44.620824 master-0 kubenswrapper[7599]: I0313 01:12:44.611547 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"] Mar 13 01:12:44.620824 master-0 kubenswrapper[7599]: I0313 01:12:44.611644 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-9hwz9"] Mar 13 01:12:44.690900 master-0 kubenswrapper[7599]: I0313 01:12:44.689146 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698179 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert\") pod \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698232 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-encryption-config\") pod \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698253 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-client\") pod \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " Mar 13 01:12:44.698557 master-0 
kubenswrapper[7599]: I0313 01:12:44.698270 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit-dir\") pod \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698289 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-proxy-ca-bundles\") pod \"ab4408ab-5c90-46c2-9483-27974e568361\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698303 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-node-pullsecrets\") pod \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698321 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-config\") pod \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698345 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert\") pod \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698364 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64q2s\" (UniqueName: 
\"kubernetes.io/projected/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-kube-api-access-64q2s\") pod \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698382 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-trusted-ca-bundle\") pod \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698409 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-config\") pod \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\" (UID: \"50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698427 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-config\") pod \"ab4408ab-5c90-46c2-9483-27974e568361\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698448 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pws4d\" (UniqueName: \"kubernetes.io/projected/ab4408ab-5c90-46c2-9483-27974e568361-kube-api-access-pws4d\") pod \"ab4408ab-5c90-46c2-9483-27974e568361\" (UID: \"ab4408ab-5c90-46c2-9483-27974e568361\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698465 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-serving-ca\") pod \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\" (UID: 
\"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698482 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4gk4\" (UniqueName: \"kubernetes.io/projected/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-kube-api-access-q4gk4\") pod \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " Mar 13 01:12:44.698557 master-0 kubenswrapper[7599]: I0313 01:12:44.698505 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-image-import-ca\") pod \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\" (UID: \"cb3c0d3e-8143-4cfe-b438-6b02112f7cc3\") " Mar 13 01:12:44.702526 master-0 kubenswrapper[7599]: I0313 01:12:44.699570 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:12:44.702526 master-0 kubenswrapper[7599]: I0313 01:12:44.700414 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:44.702526 master-0 kubenswrapper[7599]: I0313 01:12:44.700891 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:44.731287 master-0 kubenswrapper[7599]: I0313 01:12:44.709900 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-config" (OuterVolumeSpecName: "config") pod "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:44.731287 master-0 kubenswrapper[7599]: I0313 01:12:44.710197 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:44.731287 master-0 kubenswrapper[7599]: I0313 01:12:44.711497 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-config" (OuterVolumeSpecName: "config") pod "ab4408ab-5c90-46c2-9483-27974e568361" (UID: "ab4408ab-5c90-46c2-9483-27974e568361"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:44.731287 master-0 kubenswrapper[7599]: I0313 01:12:44.711549 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:12:44.731287 master-0 kubenswrapper[7599]: I0313 01:12:44.711955 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "ab4408ab-5c90-46c2-9483-27974e568361" (UID: "ab4408ab-5c90-46c2-9483-27974e568361"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:44.731287 master-0 kubenswrapper[7599]: I0313 01:12:44.715956 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-config" (OuterVolumeSpecName: "config") pod "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:12:44.731287 master-0 kubenswrapper[7599]: I0313 01:12:44.722396 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:12:44.731287 master-0 kubenswrapper[7599]: I0313 01:12:44.724410 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:12:44.731287 master-0 kubenswrapper[7599]: I0313 01:12:44.724710 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ab4408ab-5c90-46c2-9483-27974e568361-kube-api-access-pws4d" (OuterVolumeSpecName: "kube-api-access-pws4d") pod "ab4408ab-5c90-46c2-9483-27974e568361" (UID: "ab4408ab-5c90-46c2-9483-27974e568361"). InnerVolumeSpecName "kube-api-access-pws4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:12:44.731287 master-0 kubenswrapper[7599]: I0313 01:12:44.724829 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:12:44.741901 master-0 kubenswrapper[7599]: I0313 01:12:44.736243 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-kube-api-access-64q2s" (OuterVolumeSpecName: "kube-api-access-64q2s") pod "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c" (UID: "50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c"). InnerVolumeSpecName "kube-api-access-64q2s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:12:44.747695 master-0 kubenswrapper[7599]: W0313 01:12:44.744270 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6c32e816_aa69_4e9c_9fbf_56595c764f3b.slice/crio-063df2d43e5cbeb2c97fe2580ecd4460c3cfd1e7790de2a7bf5d6090738d8fb2 WatchSource:0}: Error finding container 063df2d43e5cbeb2c97fe2580ecd4460c3cfd1e7790de2a7bf5d6090738d8fb2: Status 404 returned error can't find the container with id 063df2d43e5cbeb2c97fe2580ecd4460c3cfd1e7790de2a7bf5d6090738d8fb2 Mar 13 01:12:44.747896 master-0 kubenswrapper[7599]: I0313 01:12:44.747815 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-kube-api-access-q4gk4" (OuterVolumeSpecName: "kube-api-access-q4gk4") pod "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3"). InnerVolumeSpecName "kube-api-access-q4gk4". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:12:44.753651 master-0 kubenswrapper[7599]: I0313 01:12:44.748678 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" (UID: "cb3c0d3e-8143-4cfe-b438-6b02112f7cc3"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:12:44.780062 master-0 kubenswrapper[7599]: I0313 01:12:44.778502 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"] Mar 13 01:12:44.780062 master-0 kubenswrapper[7599]: I0313 01:12:44.778567 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"] Mar 13 01:12:44.803974 master-0 kubenswrapper[7599]: I0313 01:12:44.803905 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pws4d\" (UniqueName: \"kubernetes.io/projected/ab4408ab-5c90-46c2-9483-27974e568361-kube-api-access-pws4d\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.803974 master-0 kubenswrapper[7599]: I0313 01:12:44.803963 7599 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-serving-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804097 master-0 kubenswrapper[7599]: I0313 01:12:44.803979 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q4gk4\" (UniqueName: \"kubernetes.io/projected/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-kube-api-access-q4gk4\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804097 master-0 kubenswrapper[7599]: I0313 01:12:44.804025 7599 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-image-import-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804097 master-0 kubenswrapper[7599]: I0313 01:12:44.804041 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804097 master-0 kubenswrapper[7599]: I0313 01:12:44.804055 
7599 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-encryption-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804097 master-0 kubenswrapper[7599]: I0313 01:12:44.804066 7599 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-etcd-client\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804097 master-0 kubenswrapper[7599]: I0313 01:12:44.804077 7599 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804097 master-0 kubenswrapper[7599]: I0313 01:12:44.804087 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804097 master-0 kubenswrapper[7599]: I0313 01:12:44.804099 7599 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-node-pullsecrets\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804308 master-0 kubenswrapper[7599]: I0313 01:12:44.804112 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804308 master-0 kubenswrapper[7599]: I0313 01:12:44.804124 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804308 master-0 kubenswrapper[7599]: I0313 01:12:44.804136 7599 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-64q2s\" (UniqueName: \"kubernetes.io/projected/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-kube-api-access-64q2s\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804308 master-0 kubenswrapper[7599]: I0313 01:12:44.804147 7599 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804308 master-0 kubenswrapper[7599]: I0313 01:12:44.804159 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.804447 master-0 kubenswrapper[7599]: I0313 01:12:44.804387 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:44.819700 master-0 kubenswrapper[7599]: I0313 01:12:44.816622 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"] Mar 13 01:12:44.819700 master-0 kubenswrapper[7599]: I0313 01:12:44.817884 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"] Mar 13 01:12:44.843860 master-0 kubenswrapper[7599]: W0313 01:12:44.843816 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46015913_c499_49b1_a9f6_a61c6e96b13f.slice/crio-1c7730337a9a87451fb670287a107087b846f8e46926bb6ce0f97f0cb44507c6 WatchSource:0}: Error finding container 1c7730337a9a87451fb670287a107087b846f8e46926bb6ce0f97f0cb44507c6: Status 404 returned error can't find the container with id 1c7730337a9a87451fb670287a107087b846f8e46926bb6ce0f97f0cb44507c6 Mar 13 01:12:45.060626 
master-0 kubenswrapper[7599]: I0313 01:12:45.060565 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-node-tuning-operator/tuned-p9mnd"] Mar 13 01:12:45.061112 master-0 kubenswrapper[7599]: I0313 01:12:45.061068 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.115725 master-0 kubenswrapper[7599]: I0313 01:12:45.115364 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-kubernetes\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.115725 master-0 kubenswrapper[7599]: I0313 01:12:45.115725 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-845hm\" (UniqueName: \"kubernetes.io/projected/b74de987-7962-425e-9447-24b285eb888f-kube-api-access-845hm\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.115897 master-0 kubenswrapper[7599]: I0313 01:12:45.115750 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-var-lib-kubelet\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.115897 master-0 kubenswrapper[7599]: I0313 01:12:45.115767 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-run\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " 
pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.115897 master-0 kubenswrapper[7599]: I0313 01:12:45.115795 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-lib-modules\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.115897 master-0 kubenswrapper[7599]: I0313 01:12:45.115815 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-etc-tuned\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.115897 master-0 kubenswrapper[7599]: I0313 01:12:45.115831 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-systemd\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.115897 master-0 kubenswrapper[7599]: I0313 01:12:45.115848 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysconfig\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.115897 master-0 kubenswrapper[7599]: I0313 01:12:45.115862 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-d\") pod \"tuned-p9mnd\" 
(UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.115897 master-0 kubenswrapper[7599]: I0313 01:12:45.115880 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-modprobe-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.116227 master-0 kubenswrapper[7599]: I0313 01:12:45.115912 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-host\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.116227 master-0 kubenswrapper[7599]: I0313 01:12:45.115929 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-conf\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.116227 master-0 kubenswrapper[7599]: I0313 01:12:45.115948 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-tmp\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.116227 master-0 kubenswrapper[7599]: I0313 01:12:45.115969 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-sys\") pod 
\"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.216883 master-0 kubenswrapper[7599]: I0313 01:12:45.216833 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-kubernetes\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.216883 master-0 kubenswrapper[7599]: I0313 01:12:45.216882 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-845hm\" (UniqueName: \"kubernetes.io/projected/b74de987-7962-425e-9447-24b285eb888f-kube-api-access-845hm\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.217140 master-0 kubenswrapper[7599]: I0313 01:12:45.217070 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-kubernetes\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.217140 master-0 kubenswrapper[7599]: I0313 01:12:45.217110 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-var-lib-kubelet\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.217227 master-0 kubenswrapper[7599]: I0313 01:12:45.217193 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-run\") pod \"tuned-p9mnd\" (UID: 
\"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.217771 master-0 kubenswrapper[7599]: I0313 01:12:45.217331 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-var-lib-kubelet\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.217771 master-0 kubenswrapper[7599]: I0313 01:12:45.217402 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-run\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.217771 master-0 kubenswrapper[7599]: I0313 01:12:45.217419 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-lib-modules\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.217771 master-0 kubenswrapper[7599]: I0313 01:12:45.217570 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-etc-tuned\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.217771 master-0 kubenswrapper[7599]: I0313 01:12:45.217593 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-systemd\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " 
pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.217771 master-0 kubenswrapper[7599]: I0313 01:12:45.217612 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysconfig\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.217771 master-0 kubenswrapper[7599]: I0313 01:12:45.217769 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218066 master-0 kubenswrapper[7599]: I0313 01:12:45.217805 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysconfig\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218066 master-0 kubenswrapper[7599]: I0313 01:12:45.217808 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-modprobe-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218066 master-0 kubenswrapper[7599]: I0313 01:12:45.217827 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " 
pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218066 master-0 kubenswrapper[7599]: I0313 01:12:45.217870 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-lib-modules\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218066 master-0 kubenswrapper[7599]: I0313 01:12:45.217879 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-systemd\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218066 master-0 kubenswrapper[7599]: I0313 01:12:45.217881 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-modprobe-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218066 master-0 kubenswrapper[7599]: I0313 01:12:45.217973 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-conf\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218066 master-0 kubenswrapper[7599]: I0313 01:12:45.218008 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-host\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 
01:12:45.218066 master-0 kubenswrapper[7599]: I0313 01:12:45.218035 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-tmp\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218434 master-0 kubenswrapper[7599]: I0313 01:12:45.218086 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-host\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218434 master-0 kubenswrapper[7599]: I0313 01:12:45.218168 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-sys\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218434 master-0 kubenswrapper[7599]: I0313 01:12:45.218213 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-conf\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.218434 master-0 kubenswrapper[7599]: I0313 01:12:45.218238 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-sys\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.223415 master-0 kubenswrapper[7599]: I0313 01:12:45.222453 7599 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-tmp\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.224438 master-0 kubenswrapper[7599]: I0313 01:12:45.223779 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-etc-tuned\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.242838 master-0 kubenswrapper[7599]: I0313 01:12:45.242791 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-845hm\" (UniqueName: \"kubernetes.io/projected/b74de987-7962-425e-9447-24b285eb888f-kube-api-access-845hm\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.424905 master-0 kubenswrapper[7599]: I0313 01:12:45.424845 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:12:45.545353 master-0 kubenswrapper[7599]: I0313 01:12:45.545298 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" event={"ID":"91fc568a-61ad-400e-a54e-21d62e51bb17","Type":"ContainerStarted","Data":"73dc7164c08f806e20d59b39c0dd97779a41348b9dd0a6d8c110bba4b0c80b70"} Mar 13 01:12:45.547562 master-0 kubenswrapper[7599]: I0313 01:12:45.547534 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" event={"ID":"8ad2a6d5-6edf-4840-89f9-47847c8dac05","Type":"ContainerStarted","Data":"5cdd48b8a2071aa3abf6b5c8005e72c1dbb38aa6a21e58f6cbdd8c251468cb41"} Mar 13 01:12:45.549943 master-0 kubenswrapper[7599]: I0313 01:12:45.549910 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" event={"ID":"75a53c09-210a-4346-99b0-a632b9e0a3c9","Type":"ContainerStarted","Data":"b3ca312ecb0c539d72a6fc2c44f3014ac7fd23efcdc15a549b8dee3a7ac98d2e"} Mar 13 01:12:45.560165 master-0 kubenswrapper[7599]: I0313 01:12:45.560090 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" event={"ID":"161d2fa6-a541-427a-a3e9-3297102a26f5","Type":"ContainerStarted","Data":"d285e2cd3ad810bbe2e32e2bf486a60f25f240f9aaa8797930d7581cb9051bc3"} Mar 13 01:12:45.561158 master-0 kubenswrapper[7599]: I0313 01:12:45.561124 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9hwz9" event={"ID":"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d","Type":"ContainerStarted","Data":"770fca1b39851d439e2eba8f53f5e8c6629f240ddb04931d7537be93916cfc27"} Mar 13 01:12:45.563015 master-0 kubenswrapper[7599]: I0313 01:12:45.562986 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" event={"ID":"7d874a21-43aa-4d81-b904-853fb3da5a94","Type":"ContainerStarted","Data":"fcbf9c0fa7c766ce20c025d362ad77dbeed190f70e57f548e0926fe9e857ae68"} Mar 13 01:12:45.567496 master-0 kubenswrapper[7599]: I0313 01:12:45.566664 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" event={"ID":"7d874a21-43aa-4d81-b904-853fb3da5a94","Type":"ContainerStarted","Data":"575142f53af21edb636d8632f76041e7320e9575ffe0381edac48ae0613b2525"} Mar 13 01:12:45.570499 master-0 kubenswrapper[7599]: I0313 01:12:45.569615 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"6c32e816-aa69-4e9c-9fbf-56595c764f3b","Type":"ContainerStarted","Data":"17c0598fb82fc85207d161703480300077fafb1372eee649f6385e8290aca19a"} Mar 13 01:12:45.570499 master-0 kubenswrapper[7599]: I0313 01:12:45.569683 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"6c32e816-aa69-4e9c-9fbf-56595c764f3b","Type":"ContainerStarted","Data":"063df2d43e5cbeb2c97fe2580ecd4460c3cfd1e7790de2a7bf5d6090738d8fb2"} Mar 13 01:12:45.584396 master-0 kubenswrapper[7599]: I0313 01:12:45.581760 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" event={"ID":"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59","Type":"ContainerStarted","Data":"b633d50dd8474927c7d81a658adea877ff67711d806c3c9f6845e451f193d126"} Mar 13 01:12:45.584396 master-0 kubenswrapper[7599]: I0313 01:12:45.581838 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" event={"ID":"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59","Type":"ContainerStarted","Data":"658f47ce3c2ae2a79030288ee1e25fc5980adee4919ddd23b5841d0fa0c0c0bb"} Mar 13 01:12:45.584396 master-0 
kubenswrapper[7599]: I0313 01:12:45.582919 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" event={"ID":"b74de987-7962-425e-9447-24b285eb888f","Type":"ContainerStarted","Data":"e67322ebf08e67e6e6e392d94d6e4bdf78d8f90a976d5648ecc81afeecfa52e6"} Mar 13 01:12:45.588380 master-0 kubenswrapper[7599]: I0313 01:12:45.587587 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" event={"ID":"6ad2904e-ece9-4d72-8683-c3e691e07497","Type":"ContainerStarted","Data":"c598fb9b925a609d9065bd53d80c03d631ad5c318188796c910960611dc611f4"} Mar 13 01:12:45.588380 master-0 kubenswrapper[7599]: I0313 01:12:45.587926 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" event={"ID":"46015913-c499-49b1-a9f6-a61c6e96b13f","Type":"ContainerStarted","Data":"1c7730337a9a87451fb670287a107087b846f8e46926bb6ce0f97f0cb44507c6"} Mar 13 01:12:45.599407 master-0 kubenswrapper[7599]: I0313 01:12:45.594899 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" event={"ID":"31f19d97-50f9-4486-a8f9-df61ef2b0528","Type":"ContainerStarted","Data":"061fc67620de1b52747445ea534c41ab6513f37b1f03a4e68b4308398d499797"} Mar 13 01:12:45.601671 master-0 kubenswrapper[7599]: I0313 01:12:45.601570 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" event={"ID":"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7","Type":"ContainerStarted","Data":"a5eb96a4d4ede22b3223c3ca47936d4bf89e778e44ce7bc9963d80d230415d56"} Mar 13 01:12:45.606588 master-0 kubenswrapper[7599]: I0313 01:12:45.606545 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" 
event={"ID":"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a","Type":"ContainerStarted","Data":"c427477a6d58f1162b2fff7d8283200b9284d7a746e34cf1c1801ed10b839ebf"} Mar 13 01:12:45.606736 master-0 kubenswrapper[7599]: I0313 01:12:45.606650 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-86544774bc-rdwxl" Mar 13 01:12:45.607000 master-0 kubenswrapper[7599]: I0313 01:12:45.606975 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-575fbbbb98-mc6fk" Mar 13 01:12:45.608107 master-0 kubenswrapper[7599]: I0313 01:12:45.608047 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9" Mar 13 01:12:45.631836 master-0 kubenswrapper[7599]: I0313 01:12:45.630589 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-pfsjd"] Mar 13 01:12:45.631836 master-0 kubenswrapper[7599]: I0313 01:12:45.631166 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:45.635934 master-0 kubenswrapper[7599]: I0313 01:12:45.635906 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 01:12:45.636068 master-0 kubenswrapper[7599]: I0313 01:12:45.636050 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 01:12:45.636751 master-0 kubenswrapper[7599]: I0313 01:12:45.636687 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 01:12:45.636751 master-0 kubenswrapper[7599]: I0313 01:12:45.636725 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 01:12:45.644493 master-0 kubenswrapper[7599]: I0313 01:12:45.644410 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pfsjd"] Mar 13 01:12:45.645325 master-0 kubenswrapper[7599]: I0313 01:12:45.645269 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-1-master-0" podStartSLOduration=2.645259379 podStartE2EDuration="2.645259379s" podCreationTimestamp="2026-03-13 01:12:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:45.643078999 +0000 UTC m=+24.914758393" watchObservedRunningTime="2026-03-13 01:12:45.645259379 +0000 UTC m=+24.916938773" Mar 13 01:12:45.722538 master-0 kubenswrapper[7599]: I0313 01:12:45.719820 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"] Mar 13 01:12:45.722538 master-0 kubenswrapper[7599]: I0313 01:12:45.721675 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.726079 master-0 kubenswrapper[7599]: I0313 01:12:45.725532 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Mar 13 01:12:45.726079 master-0 kubenswrapper[7599]: I0313 01:12:45.725855 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 01:12:45.726245 master-0 kubenswrapper[7599]: I0313 01:12:45.726146 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-apiserver/apiserver-86544774bc-rdwxl"] Mar 13 01:12:45.731591 master-0 kubenswrapper[7599]: I0313 01:12:45.729148 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 13 01:12:45.735533 master-0 kubenswrapper[7599]: I0313 01:12:45.731830 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 13 01:12:45.735533 master-0 kubenswrapper[7599]: I0313 01:12:45.733726 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 13 01:12:45.735533 master-0 kubenswrapper[7599]: I0313 01:12:45.733903 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Mar 13 01:12:45.735533 master-0 kubenswrapper[7599]: I0313 01:12:45.734120 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 01:12:45.735533 master-0 kubenswrapper[7599]: I0313 01:12:45.734259 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Mar 13 01:12:45.735533 master-0 kubenswrapper[7599]: I0313 01:12:45.734475 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 01:12:45.735533 master-0 kubenswrapper[7599]: I0313 01:12:45.734622 7599 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 01:12:45.746429 master-0 kubenswrapper[7599]: I0313 01:12:45.746344 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"] Mar 13 01:12:45.748484 master-0 kubenswrapper[7599]: I0313 01:12:45.747679 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-apiserver/apiserver-86544774bc-rdwxl"] Mar 13 01:12:45.757467 master-0 kubenswrapper[7599]: I0313 01:12:45.741580 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit-dir\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.757574 master-0 kubenswrapper[7599]: I0313 01:12:45.757526 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-trusted-ca-bundle\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.757574 master-0 kubenswrapper[7599]: I0313 01:12:45.757560 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-config\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.757643 master-0 kubenswrapper[7599]: I0313 01:12:45.757615 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wds6q\" (UniqueName: 
\"kubernetes.io/projected/95c7493b-ad9d-490e-83f3-aa28750b2b5e-kube-api-access-wds6q\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:45.757643 master-0 kubenswrapper[7599]: I0313 01:12:45.757635 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-encryption-config\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.757694 master-0 kubenswrapper[7599]: I0313 01:12:45.757654 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lz8ww\" (UniqueName: \"kubernetes.io/projected/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-kube-api-access-lz8ww\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.757694 master-0 kubenswrapper[7599]: I0313 01:12:45.757683 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-client\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.757755 master-0 kubenswrapper[7599]: I0313 01:12:45.757702 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.757834 master-0 kubenswrapper[7599]: I0313 01:12:45.757806 7599 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-serving-cert\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.757922 master-0 kubenswrapper[7599]: I0313 01:12:45.757892 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-serving-ca\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.757954 master-0 kubenswrapper[7599]: I0313 01:12:45.757941 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-node-pullsecrets\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.758078 master-0 kubenswrapper[7599]: I0313 01:12:45.758049 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c7493b-ad9d-490e-83f3-aa28750b2b5e-config-volume\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:45.758114 master-0 kubenswrapper[7599]: I0313 01:12:45.758096 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-image-import-ca\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " 
pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.758156 master-0 kubenswrapper[7599]: I0313 01:12:45.758132 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95c7493b-ad9d-490e-83f3-aa28750b2b5e-metrics-tls\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:45.801044 master-0 kubenswrapper[7599]: I0313 01:12:45.798914 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-575fbbbb98-mc6fk"] Mar 13 01:12:45.801044 master-0 kubenswrapper[7599]: I0313 01:12:45.799646 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-575fbbbb98-mc6fk"] Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.859826 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c7493b-ad9d-490e-83f3-aa28750b2b5e-config-volume\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.859889 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-image-import-ca\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.859923 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95c7493b-ad9d-490e-83f3-aa28750b2b5e-metrics-tls\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " 
pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.859957 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit-dir\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.859981 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-trusted-ca-bundle\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.860221 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit-dir\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: E0313 01:12:45.860342 7599 secret.go:189] Couldn't get secret openshift-dns/dns-default-metrics-tls: secret "dns-default-metrics-tls" not found Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: E0313 01:12:45.860398 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/95c7493b-ad9d-490e-83f3-aa28750b2b5e-metrics-tls podName:95c7493b-ad9d-490e-83f3-aa28750b2b5e nodeName:}" failed. No retries permitted until 2026-03-13 01:12:46.360377995 +0000 UTC m=+25.632057389 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-tls" (UniqueName: "kubernetes.io/secret/95c7493b-ad9d-490e-83f3-aa28750b2b5e-metrics-tls") pod "dns-default-pfsjd" (UID: "95c7493b-ad9d-490e-83f3-aa28750b2b5e") : secret "dns-default-metrics-tls" not found Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.860670 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-config\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.860711 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wds6q\" (UniqueName: \"kubernetes.io/projected/95c7493b-ad9d-490e-83f3-aa28750b2b5e-kube-api-access-wds6q\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.860729 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-encryption-config\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.860748 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz8ww\" (UniqueName: \"kubernetes.io/projected/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-kube-api-access-lz8ww\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.860765 7599 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-client\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.860789 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.860831 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-image-import-ca\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.860841 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-serving-cert\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.860888 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c7493b-ad9d-490e-83f3-aa28750b2b5e-config-volume\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.861363 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-config\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.864288 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-trusted-ca-bundle\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.864428 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-serving-ca\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.864473 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-node-pullsecrets\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.864581 7599 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3-audit\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.864595 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ab4408ab-5c90-46c2-9483-27974e568361-client-ca\") on node \"master-0\" 
DevicePath \"\"" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.864606 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ab4408ab-5c90-46c2-9483-27974e568361-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.864664 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-node-pullsecrets\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.872434 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-serving-ca\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.872602 master-0 kubenswrapper[7599]: I0313 01:12:45.872451 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:45.955630 master-0 kubenswrapper[7599]: I0313 01:12:45.953318 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9"] Mar 13 01:12:45.956005 master-0 kubenswrapper[7599]: I0313 01:12:45.955303 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-5cbd8bb87d-t6wm9"] Mar 13 01:12:46.017528 master-0 kubenswrapper[7599]: I0313 01:12:46.013297 7599 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-serving-cert\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:46.017528 master-0 kubenswrapper[7599]: I0313 01:12:46.013931 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-client\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:46.018240 master-0 kubenswrapper[7599]: I0313 01:12:46.018169 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-encryption-config\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:46.020837 master-0 kubenswrapper[7599]: I0313 01:12:46.020805 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz8ww\" (UniqueName: \"kubernetes.io/projected/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-kube-api-access-lz8ww\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:46.036356 master-0 kubenswrapper[7599]: I0313 01:12:46.036149 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wds6q\" (UniqueName: \"kubernetes.io/projected/95c7493b-ad9d-490e-83f3-aa28750b2b5e-kube-api-access-wds6q\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:46.083987 master-0 kubenswrapper[7599]: I0313 01:12:46.083863 7599 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-xmwg6"] Mar 13 01:12:46.086059 master-0 kubenswrapper[7599]: I0313 01:12:46.085224 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:12:46.091519 master-0 kubenswrapper[7599]: I0313 01:12:46.091468 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:12:46.098767 master-0 kubenswrapper[7599]: I0313 01:12:46.098735 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:12:46.186319 master-0 kubenswrapper[7599]: I0313 01:12:46.186022 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bd264af8-4ced-40c4-b4f6-202bab42d0cb-hosts-file\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:12:46.186319 master-0 kubenswrapper[7599]: I0313 01:12:46.186079 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xcf2h\" (UniqueName: \"kubernetes.io/projected/bd264af8-4ced-40c4-b4f6-202bab42d0cb-kube-api-access-xcf2h\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:12:46.287906 master-0 kubenswrapper[7599]: I0313 01:12:46.287852 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bd264af8-4ced-40c4-b4f6-202bab42d0cb-hosts-file\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:12:46.288020 master-0 
kubenswrapper[7599]: I0313 01:12:46.287908 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcf2h\" (UniqueName: \"kubernetes.io/projected/bd264af8-4ced-40c4-b4f6-202bab42d0cb-kube-api-access-xcf2h\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:12:46.289627 master-0 kubenswrapper[7599]: I0313 01:12:46.289561 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bd264af8-4ced-40c4-b4f6-202bab42d0cb-hosts-file\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:12:46.312391 master-0 kubenswrapper[7599]: I0313 01:12:46.312320 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcf2h\" (UniqueName: \"kubernetes.io/projected/bd264af8-4ced-40c4-b4f6-202bab42d0cb-kube-api-access-xcf2h\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:12:46.391827 master-0 kubenswrapper[7599]: I0313 01:12:46.389462 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95c7493b-ad9d-490e-83f3-aa28750b2b5e-metrics-tls\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:46.394173 master-0 kubenswrapper[7599]: I0313 01:12:46.394124 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95c7493b-ad9d-490e-83f3-aa28750b2b5e-metrics-tls\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:46.403327 master-0 kubenswrapper[7599]: I0313 01:12:46.403277 7599 kubelet.go:2428] "SyncLoop UPDATE" 
source="api" pods=["openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"] Mar 13 01:12:46.424736 master-0 kubenswrapper[7599]: W0313 01:12:46.424663 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbe2913a0_453b_4b24_ab2c_b8ef2ad3ac16.slice/crio-075e91dd63c1e740e494eccc3ead8f62731d857d106f25bfcfaa922018525117 WatchSource:0}: Error finding container 075e91dd63c1e740e494eccc3ead8f62731d857d106f25bfcfaa922018525117: Status 404 returned error can't find the container with id 075e91dd63c1e740e494eccc3ead8f62731d857d106f25bfcfaa922018525117 Mar 13 01:12:46.426247 master-0 kubenswrapper[7599]: I0313 01:12:46.426196 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:12:46.446160 master-0 kubenswrapper[7599]: W0313 01:12:46.445421 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd264af8_4ced_40c4_b4f6_202bab42d0cb.slice/crio-234a363db8b880e78d95679d40d82c251d6d6e0dfd2a1cd27b2a2de32ddb7344 WatchSource:0}: Error finding container 234a363db8b880e78d95679d40d82c251d6d6e0dfd2a1cd27b2a2de32ddb7344: Status 404 returned error can't find the container with id 234a363db8b880e78d95679d40d82c251d6d6e0dfd2a1cd27b2a2de32ddb7344 Mar 13 01:12:46.559089 master-0 kubenswrapper[7599]: I0313 01:12:46.558974 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-pfsjd" Mar 13 01:12:46.632369 master-0 kubenswrapper[7599]: I0313 01:12:46.632288 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" event={"ID":"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16","Type":"ContainerStarted","Data":"075e91dd63c1e740e494eccc3ead8f62731d857d106f25bfcfaa922018525117"} Mar 13 01:12:46.645281 master-0 kubenswrapper[7599]: I0313 01:12:46.644836 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xmwg6" event={"ID":"bd264af8-4ced-40c4-b4f6-202bab42d0cb","Type":"ContainerStarted","Data":"234a363db8b880e78d95679d40d82c251d6d6e0dfd2a1cd27b2a2de32ddb7344"} Mar 13 01:12:46.653224 master-0 kubenswrapper[7599]: I0313 01:12:46.653162 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" event={"ID":"b74de987-7962-425e-9447-24b285eb888f","Type":"ContainerStarted","Data":"900218ef3cf765c4bc489d2ae369c570207d3ab3fa29cef8db29627083e83d2c"} Mar 13 01:12:46.676658 master-0 kubenswrapper[7599]: I0313 01:12:46.676545 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" podStartSLOduration=1.676488467 podStartE2EDuration="1.676488467s" podCreationTimestamp="2026-03-13 01:12:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:46.674790103 +0000 UTC m=+25.946469517" watchObservedRunningTime="2026-03-13 01:12:46.676488467 +0000 UTC m=+25.948167861" Mar 13 01:12:46.835722 master-0 kubenswrapper[7599]: I0313 01:12:46.835596 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-pfsjd"] Mar 13 01:12:46.850116 master-0 kubenswrapper[7599]: W0313 01:12:46.848188 7599 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95c7493b_ad9d_490e_83f3_aa28750b2b5e.slice/crio-29c3e604fa02f6812f7b745e1345b811004751bfdbd70448e21ada412112c94f WatchSource:0}: Error finding container 29c3e604fa02f6812f7b745e1345b811004751bfdbd70448e21ada412112c94f: Status 404 returned error can't find the container with id 29c3e604fa02f6812f7b745e1345b811004751bfdbd70448e21ada412112c94f Mar 13 01:12:46.989381 master-0 kubenswrapper[7599]: I0313 01:12:46.989344 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c" path="/var/lib/kubelet/pods/50eea1c7-fb9e-4b6f-b015-75ff1d8b8c3c/volumes" Mar 13 01:12:46.989725 master-0 kubenswrapper[7599]: I0313 01:12:46.989705 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ab4408ab-5c90-46c2-9483-27974e568361" path="/var/lib/kubelet/pods/ab4408ab-5c90-46c2-9483-27974e568361/volumes" Mar 13 01:12:46.990068 master-0 kubenswrapper[7599]: I0313 01:12:46.990049 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb3c0d3e-8143-4cfe-b438-6b02112f7cc3" path="/var/lib/kubelet/pods/cb3c0d3e-8143-4cfe-b438-6b02112f7cc3/volumes" Mar 13 01:12:47.657737 master-0 kubenswrapper[7599]: I0313 01:12:47.657603 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pfsjd" event={"ID":"95c7493b-ad9d-490e-83f3-aa28750b2b5e","Type":"ContainerStarted","Data":"29c3e604fa02f6812f7b745e1345b811004751bfdbd70448e21ada412112c94f"} Mar 13 01:12:47.660897 master-0 kubenswrapper[7599]: I0313 01:12:47.660854 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-xmwg6" event={"ID":"bd264af8-4ced-40c4-b4f6-202bab42d0cb","Type":"ContainerStarted","Data":"2a699ac2c4572c58d7b04e1f492fd18742ca5d3027251730b6d463287a8061ad"} Mar 13 01:12:47.676390 master-0 kubenswrapper[7599]: I0313 01:12:47.676296 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/node-resolver-xmwg6" podStartSLOduration=1.676280503 podStartE2EDuration="1.676280503s" podCreationTimestamp="2026-03-13 01:12:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:47.673800794 +0000 UTC m=+26.945480198" watchObservedRunningTime="2026-03-13 01:12:47.676280503 +0000 UTC m=+26.947959897" Mar 13 01:12:48.334653 master-0 kubenswrapper[7599]: I0313 01:12:48.332619 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p"] Mar 13 01:12:48.334653 master-0 kubenswrapper[7599]: I0313 01:12:48.333228 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.338420 master-0 kubenswrapper[7599]: I0313 01:12:48.336920 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h"] Mar 13 01:12:48.338420 master-0 kubenswrapper[7599]: I0313 01:12:48.337259 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.341206 master-0 kubenswrapper[7599]: I0313 01:12:48.339432 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 01:12:48.341206 master-0 kubenswrapper[7599]: I0313 01:12:48.339987 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 01:12:48.341206 master-0 kubenswrapper[7599]: I0313 01:12:48.340294 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 01:12:48.341206 master-0 kubenswrapper[7599]: I0313 01:12:48.340486 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 01:12:48.341206 master-0 kubenswrapper[7599]: I0313 01:12:48.341041 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 01:12:48.341389 master-0 kubenswrapper[7599]: I0313 01:12:48.341345 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 01:12:48.342527 master-0 kubenswrapper[7599]: I0313 01:12:48.341482 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 01:12:48.342527 master-0 kubenswrapper[7599]: I0313 01:12:48.341597 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 01:12:48.342527 master-0 kubenswrapper[7599]: I0313 01:12:48.341969 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 01:12:48.342527 master-0 kubenswrapper[7599]: I0313 01:12:48.342155 7599 reflector.go:368] Caches populated for *v1.ConfigMap 
from object-"openshift-route-controller-manager"/"config" Mar 13 01:12:48.355504 master-0 kubenswrapper[7599]: I0313 01:12:48.350911 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 01:12:48.355504 master-0 kubenswrapper[7599]: I0313 01:12:48.353167 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h"] Mar 13 01:12:48.363010 master-0 kubenswrapper[7599]: I0313 01:12:48.362907 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p"] Mar 13 01:12:48.434105 master-0 kubenswrapper[7599]: I0313 01:12:48.434040 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-config\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.434105 master-0 kubenswrapper[7599]: I0313 01:12:48.434107 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-967kg\" (UniqueName: \"kubernetes.io/projected/8dc25b28-3de0-472d-afe3-198a83f112c1-kube-api-access-967kg\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.434736 master-0 kubenswrapper[7599]: I0313 01:12:48.434159 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " 
pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.434736 master-0 kubenswrapper[7599]: I0313 01:12:48.434221 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-config\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.434736 master-0 kubenswrapper[7599]: I0313 01:12:48.434244 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v5szm\" (UniqueName: \"kubernetes.io/projected/37138c19-447a-4476-b108-08998f3a0f54-kube-api-access-v5szm\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.434736 master-0 kubenswrapper[7599]: I0313 01:12:48.434368 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37138c19-447a-4476-b108-08998f3a0f54-serving-cert\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.434736 master-0 kubenswrapper[7599]: I0313 01:12:48.434419 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc25b28-3de0-472d-afe3-198a83f112c1-serving-cert\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.434736 master-0 kubenswrapper[7599]: I0313 01:12:48.434555 7599 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.434736 master-0 kubenswrapper[7599]: I0313 01:12:48.434607 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-proxy-ca-bundles\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.536093 master-0 kubenswrapper[7599]: I0313 01:12:48.535782 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-967kg\" (UniqueName: \"kubernetes.io/projected/8dc25b28-3de0-472d-afe3-198a83f112c1-kube-api-access-967kg\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: I0313 01:12:48.536299 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: I0313 01:12:48.536338 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-config\") pod 
\"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: I0313 01:12:48.536356 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v5szm\" (UniqueName: \"kubernetes.io/projected/37138c19-447a-4476-b108-08998f3a0f54-kube-api-access-v5szm\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: I0313 01:12:48.536379 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37138c19-447a-4476-b108-08998f3a0f54-serving-cert\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: I0313 01:12:48.536397 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc25b28-3de0-472d-afe3-198a83f112c1-serving-cert\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: I0313 01:12:48.536420 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.538885 master-0 
kubenswrapper[7599]: I0313 01:12:48.536436 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-proxy-ca-bundles\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: I0313 01:12:48.536456 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-config\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: E0313 01:12:48.537117 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: E0313 01:12:48.537225 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca podName:8dc25b28-3de0-472d-afe3-198a83f112c1 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:49.037194763 +0000 UTC m=+28.308874337 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca") pod "controller-manager-7f67fd7ddc-fvj8p" (UID: "8dc25b28-3de0-472d-afe3-198a83f112c1") : configmap "client-ca" not found Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: I0313 01:12:48.538079 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-config\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: E0313 01:12:48.538773 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:48.538885 master-0 kubenswrapper[7599]: E0313 01:12:48.538871 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca podName:37138c19-447a-4476-b108-08998f3a0f54 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:49.038851945 +0000 UTC m=+28.310531339 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca") pod "route-controller-manager-7974bfc85-x789h" (UID: "37138c19-447a-4476-b108-08998f3a0f54") : configmap "client-ca" not found Mar 13 01:12:48.541858 master-0 kubenswrapper[7599]: I0313 01:12:48.539257 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-config\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.541858 master-0 kubenswrapper[7599]: I0313 01:12:48.540447 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-proxy-ca-bundles\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.547151 master-0 kubenswrapper[7599]: I0313 01:12:48.545113 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc25b28-3de0-472d-afe3-198a83f112c1-serving-cert\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:48.555710 master-0 kubenswrapper[7599]: I0313 01:12:48.554582 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37138c19-447a-4476-b108-08998f3a0f54-serving-cert\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.557229 master-0 
kubenswrapper[7599]: I0313 01:12:48.557202 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v5szm\" (UniqueName: \"kubernetes.io/projected/37138c19-447a-4476-b108-08998f3a0f54-kube-api-access-v5szm\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:48.560241 master-0 kubenswrapper[7599]: I0313 01:12:48.560162 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-967kg\" (UniqueName: \"kubernetes.io/projected/8dc25b28-3de0-472d-afe3-198a83f112c1-kube-api-access-967kg\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:49.044687 master-0 kubenswrapper[7599]: I0313 01:12:49.044619 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:49.045403 master-0 kubenswrapper[7599]: E0313 01:12:49.044852 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:49.045403 master-0 kubenswrapper[7599]: I0313 01:12:49.045012 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:49.045403 master-0 kubenswrapper[7599]: E0313 
01:12:49.045089 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca podName:37138c19-447a-4476-b108-08998f3a0f54 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:50.04506186 +0000 UTC m=+29.316741314 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca") pod "route-controller-manager-7974bfc85-x789h" (UID: "37138c19-447a-4476-b108-08998f3a0f54") : configmap "client-ca" not found Mar 13 01:12:49.045403 master-0 kubenswrapper[7599]: E0313 01:12:49.045176 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:49.045403 master-0 kubenswrapper[7599]: E0313 01:12:49.045245 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca podName:8dc25b28-3de0-472d-afe3-198a83f112c1 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:50.045227225 +0000 UTC m=+29.316906619 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca") pod "controller-manager-7f67fd7ddc-fvj8p" (UID: "8dc25b28-3de0-472d-afe3-198a83f112c1") : configmap "client-ca" not found Mar 13 01:12:49.252741 master-0 kubenswrapper[7599]: I0313 01:12:49.252681 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 13 01:12:49.253241 master-0 kubenswrapper[7599]: I0313 01:12:49.253216 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.255696 master-0 kubenswrapper[7599]: I0313 01:12:49.255669 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 13 01:12:49.260750 master-0 kubenswrapper[7599]: I0313 01:12:49.260715 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"] Mar 13 01:12:49.348948 master-0 kubenswrapper[7599]: I0313 01:12:49.348791 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-var-lock\") pod \"installer-1-master-0\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.348948 master-0 kubenswrapper[7599]: I0313 01:12:49.348876 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfb4407e-71fc-4684-aded-cc84f7e306dc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.349228 master-0 kubenswrapper[7599]: I0313 01:12:49.349174 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.450556 master-0 kubenswrapper[7599]: I0313 01:12:49.450460 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " 
pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.450800 master-0 kubenswrapper[7599]: I0313 01:12:49.450648 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.450800 master-0 kubenswrapper[7599]: I0313 01:12:49.450741 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-var-lock\") pod \"installer-1-master-0\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.450859 master-0 kubenswrapper[7599]: I0313 01:12:49.450830 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-var-lock\") pod \"installer-1-master-0\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.450939 master-0 kubenswrapper[7599]: I0313 01:12:49.450914 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfb4407e-71fc-4684-aded-cc84f7e306dc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.468727 master-0 kubenswrapper[7599]: I0313 01:12:49.468681 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfb4407e-71fc-4684-aded-cc84f7e306dc-kube-api-access\") pod \"installer-1-master-0\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.575363 master-0 kubenswrapper[7599]: 
I0313 01:12:49.575229 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 01:12:49.680003 master-0 kubenswrapper[7599]: I0313 01:12:49.679943 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-mkkgg" event={"ID":"69da0e58-2ae6-4d4b-b125-77e93df3d660","Type":"ContainerStarted","Data":"37229c9138aee35a0d3d32388e85a425105ba000ca7a2867995cce881e61cf2c"} Mar 13 01:12:50.061636 master-0 kubenswrapper[7599]: I0313 01:12:50.061568 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:50.061636 master-0 kubenswrapper[7599]: I0313 01:12:50.061643 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:50.062130 master-0 kubenswrapper[7599]: E0313 01:12:50.061742 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:50.062130 master-0 kubenswrapper[7599]: E0313 01:12:50.061795 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca podName:8dc25b28-3de0-472d-afe3-198a83f112c1 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:52.061779606 +0000 UTC m=+31.333459000 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca") pod "controller-manager-7f67fd7ddc-fvj8p" (UID: "8dc25b28-3de0-472d-afe3-198a83f112c1") : configmap "client-ca" not found Mar 13 01:12:50.062130 master-0 kubenswrapper[7599]: E0313 01:12:50.062112 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:50.062130 master-0 kubenswrapper[7599]: E0313 01:12:50.062134 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca podName:37138c19-447a-4476-b108-08998f3a0f54 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:52.062127437 +0000 UTC m=+31.333806831 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca") pod "route-controller-manager-7974bfc85-x789h" (UID: "37138c19-447a-4476-b108-08998f3a0f54") : configmap "client-ca" not found Mar 13 01:12:50.253086 master-0 kubenswrapper[7599]: I0313 01:12:50.253035 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st"] Mar 13 01:12:50.254386 master-0 kubenswrapper[7599]: I0313 01:12:50.253846 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.260162 master-0 kubenswrapper[7599]: I0313 01:12:50.260105 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 01:12:50.260775 master-0 kubenswrapper[7599]: I0313 01:12:50.260554 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 01:12:50.264417 master-0 kubenswrapper[7599]: I0313 01:12:50.264241 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 01:12:50.264417 master-0 kubenswrapper[7599]: I0313 01:12:50.264310 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 01:12:50.264417 master-0 kubenswrapper[7599]: I0313 01:12:50.264341 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 01:12:50.264574 master-0 kubenswrapper[7599]: I0313 01:12:50.264453 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 01:12:50.270588 master-0 kubenswrapper[7599]: I0313 01:12:50.268307 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 01:12:50.270588 master-0 kubenswrapper[7599]: I0313 01:12:50.268820 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 01:12:50.285800 master-0 kubenswrapper[7599]: I0313 01:12:50.285754 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st"] Mar 13 01:12:50.370277 master-0 kubenswrapper[7599]: I0313 01:12:50.365526 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-client\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.370277 master-0 kubenswrapper[7599]: I0313 01:12:50.365577 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-serving-ca\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.370277 master-0 kubenswrapper[7599]: I0313 01:12:50.365630 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pt5g7\" (UniqueName: \"kubernetes.io/projected/536a2de1-e13c-47d1-b61d-88e0a5fd2851-kube-api-access-pt5g7\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.370277 master-0 kubenswrapper[7599]: I0313 01:12:50.365658 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-trusted-ca-bundle\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.370277 master-0 kubenswrapper[7599]: I0313 01:12:50.365683 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-dir\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.370277 master-0 
kubenswrapper[7599]: I0313 01:12:50.365699 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-encryption-config\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.370277 master-0 kubenswrapper[7599]: I0313 01:12:50.365726 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-policies\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.370277 master-0 kubenswrapper[7599]: I0313 01:12:50.365786 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-serving-cert\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.466490 master-0 kubenswrapper[7599]: I0313 01:12:50.466438 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-serving-cert\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.466490 master-0 kubenswrapper[7599]: I0313 01:12:50.466501 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-client\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") 
" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.466750 master-0 kubenswrapper[7599]: I0313 01:12:50.466533 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-serving-ca\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.466750 master-0 kubenswrapper[7599]: I0313 01:12:50.466576 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt5g7\" (UniqueName: \"kubernetes.io/projected/536a2de1-e13c-47d1-b61d-88e0a5fd2851-kube-api-access-pt5g7\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.466750 master-0 kubenswrapper[7599]: I0313 01:12:50.466602 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-trusted-ca-bundle\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.466750 master-0 kubenswrapper[7599]: I0313 01:12:50.466620 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-dir\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.466878 master-0 kubenswrapper[7599]: I0313 01:12:50.466818 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-encryption-config\") pod 
\"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.466918 master-0 kubenswrapper[7599]: I0313 01:12:50.466877 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-dir\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.467793 master-0 kubenswrapper[7599]: I0313 01:12:50.467619 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-policies\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.468117 master-0 kubenswrapper[7599]: I0313 01:12:50.468076 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-trusted-ca-bundle\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.468471 master-0 kubenswrapper[7599]: I0313 01:12:50.468447 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-policies\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.469990 master-0 kubenswrapper[7599]: I0313 01:12:50.469955 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-serving-ca\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.472265 master-0 kubenswrapper[7599]: I0313 01:12:50.472243 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-encryption-config\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.474446 master-0 kubenswrapper[7599]: I0313 01:12:50.474380 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-serving-cert\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.477770 master-0 kubenswrapper[7599]: I0313 01:12:50.477725 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-client\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.488082 master-0 kubenswrapper[7599]: I0313 01:12:50.487964 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt5g7\" (UniqueName: \"kubernetes.io/projected/536a2de1-e13c-47d1-b61d-88e0a5fd2851-kube-api-access-pt5g7\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:50.611604 master-0 kubenswrapper[7599]: I0313 01:12:50.611042 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:12:52.024021 master-0 kubenswrapper[7599]: I0313 01:12:52.020954 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 01:12:52.024021 master-0 kubenswrapper[7599]: I0313 01:12:52.021231 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-1-master-0" podUID="6c32e816-aa69-4e9c-9fbf-56595c764f3b" containerName="installer" containerID="cri-o://17c0598fb82fc85207d161703480300077fafb1372eee649f6385e8290aca19a" gracePeriod=30 Mar 13 01:12:52.093536 master-0 kubenswrapper[7599]: I0313 01:12:52.093428 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:52.093734 master-0 kubenswrapper[7599]: E0313 01:12:52.093637 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:52.093734 master-0 kubenswrapper[7599]: E0313 01:12:52.093714 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca podName:37138c19-447a-4476-b108-08998f3a0f54 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:56.093683018 +0000 UTC m=+35.365362412 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca") pod "route-controller-manager-7974bfc85-x789h" (UID: "37138c19-447a-4476-b108-08998f3a0f54") : configmap "client-ca" not found Mar 13 01:12:52.093800 master-0 kubenswrapper[7599]: E0313 01:12:52.093767 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:52.093800 master-0 kubenswrapper[7599]: E0313 01:12:52.093794 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca podName:8dc25b28-3de0-472d-afe3-198a83f112c1 nodeName:}" failed. No retries permitted until 2026-03-13 01:12:56.093787572 +0000 UTC m=+35.365466966 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca") pod "controller-manager-7f67fd7ddc-fvj8p" (UID: "8dc25b28-3de0-472d-afe3-198a83f112c1") : configmap "client-ca" not found Mar 13 01:12:52.093931 master-0 kubenswrapper[7599]: I0313 01:12:52.093902 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:54.728483 master-0 kubenswrapper[7599]: I0313 01:12:54.728416 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 01:12:54.729164 master-0 kubenswrapper[7599]: I0313 01:12:54.729140 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:54.772585 master-0 kubenswrapper[7599]: I0313 01:12:54.769766 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 01:12:54.847604 master-0 kubenswrapper[7599]: I0313 01:12:54.847542 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:54.847604 master-0 kubenswrapper[7599]: I0313 01:12:54.847594 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-var-lock\") pod \"installer-2-master-0\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:54.847847 master-0 kubenswrapper[7599]: I0313 01:12:54.847629 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:54.956536 master-0 kubenswrapper[7599]: I0313 01:12:54.950900 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:54.956536 master-0 kubenswrapper[7599]: I0313 01:12:54.950992 7599 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-var-lock\") pod \"installer-2-master-0\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:54.956536 master-0 kubenswrapper[7599]: I0313 01:12:54.951463 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:54.956536 master-0 kubenswrapper[7599]: I0313 01:12:54.951911 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:54.956536 master-0 kubenswrapper[7599]: I0313 01:12:54.951961 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-var-lock\") pod \"installer-2-master-0\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:55.007647 master-0 kubenswrapper[7599]: I0313 01:12:55.007537 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kube-api-access\") pod \"installer-2-master-0\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:55.047619 master-0 kubenswrapper[7599]: I0313 01:12:55.047546 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:12:56.171691 master-0 kubenswrapper[7599]: I0313 01:12:56.171622 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:12:56.172166 master-0 kubenswrapper[7599]: E0313 01:12:56.171814 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:56.172166 master-0 kubenswrapper[7599]: I0313 01:12:56.171883 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:12:56.172166 master-0 kubenswrapper[7599]: E0313 01:12:56.171926 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca podName:8dc25b28-3de0-472d-afe3-198a83f112c1 nodeName:}" failed. No retries permitted until 2026-03-13 01:13:04.171896784 +0000 UTC m=+43.443576268 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca") pod "controller-manager-7f67fd7ddc-fvj8p" (UID: "8dc25b28-3de0-472d-afe3-198a83f112c1") : configmap "client-ca" not found Mar 13 01:12:56.172166 master-0 kubenswrapper[7599]: E0313 01:12:56.172110 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:12:56.172354 master-0 kubenswrapper[7599]: E0313 01:12:56.172203 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca podName:37138c19-447a-4476-b108-08998f3a0f54 nodeName:}" failed. No retries permitted until 2026-03-13 01:13:04.172177822 +0000 UTC m=+43.443857356 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca") pod "route-controller-manager-7974bfc85-x789h" (UID: "37138c19-447a-4476-b108-08998f3a0f54") : configmap "client-ca" not found Mar 13 01:12:56.233193 master-0 kubenswrapper[7599]: I0313 01:12:56.233137 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"] Mar 13 01:12:56.237537 master-0 kubenswrapper[7599]: I0313 01:12:56.233847 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.244334 master-0 kubenswrapper[7599]: I0313 01:12:56.244274 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 13 01:12:56.245311 master-0 kubenswrapper[7599]: I0313 01:12:56.245270 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 13 01:12:56.246702 master-0 kubenswrapper[7599]: I0313 01:12:56.246670 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 13 01:12:56.250767 master-0 kubenswrapper[7599]: I0313 01:12:56.250710 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 13 01:12:56.327207 master-0 kubenswrapper[7599]: I0313 01:12:56.325101 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"] Mar 13 01:12:56.374681 master-0 kubenswrapper[7599]: I0313 01:12:56.374639 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.374770 master-0 kubenswrapper[7599]: I0313 01:12:56.374687 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.374770 master-0 
kubenswrapper[7599]: I0313 01:12:56.374713 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/81835d51-a414-440f-889b-690561e98d6a-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.374770 master-0 kubenswrapper[7599]: I0313 01:12:56.374749 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nd8dv\" (UniqueName: \"kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-kube-api-access-nd8dv\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.374863 master-0 kubenswrapper[7599]: I0313 01:12:56.374777 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81835d51-a414-440f-889b-690561e98d6a-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.374863 master-0 kubenswrapper[7599]: I0313 01:12:56.374806 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.405343 master-0 kubenswrapper[7599]: I0313 01:12:56.395749 7599 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"] Mar 13 01:12:56.405343 master-0 kubenswrapper[7599]: I0313 01:12:56.397412 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:12:56.405643 master-0 kubenswrapper[7599]: I0313 01:12:56.405489 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 13 01:12:56.405643 master-0 kubenswrapper[7599]: I0313 01:12:56.405569 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 13 01:12:56.421153 master-0 kubenswrapper[7599]: I0313 01:12:56.412217 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 13 01:12:56.476155 master-0 kubenswrapper[7599]: I0313 01:12:56.476122 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81835d51-a414-440f-889b-690561e98d6a-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.476242 master-0 kubenswrapper[7599]: I0313 01:12:56.476174 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:12:56.476458 master-0 kubenswrapper[7599]: I0313 01:12:56.476409 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:12:56.476661 master-0 kubenswrapper[7599]: I0313 01:12:56.476626 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81835d51-a414-440f-889b-690561e98d6a-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.477224 master-0 kubenswrapper[7599]: I0313 01:12:56.476912 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbcg4\" (UniqueName: \"kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-kube-api-access-nbcg4\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:12:56.477224 master-0 kubenswrapper[7599]: I0313 01:12:56.476954 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:12:56.477224 master-0 kubenswrapper[7599]: I0313 01:12:56.476996 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: 
\"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.477224 master-0 kubenswrapper[7599]: I0313 01:12:56.477062 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.477224 master-0 kubenswrapper[7599]: I0313 01:12:56.477103 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.477224 master-0 kubenswrapper[7599]: I0313 01:12:56.477145 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:12:56.477224 master-0 kubenswrapper[7599]: I0313 01:12:56.477168 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/81835d51-a414-440f-889b-690561e98d6a-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " 
pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:12:56.477224 master-0 kubenswrapper[7599]: I0313 01:12:56.477208 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.479301 master-0 kubenswrapper[7599]: I0313 01:12:56.477284 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd8dv\" (UniqueName: \"kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-kube-api-access-nd8dv\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:12:56.479301 master-0 kubenswrapper[7599]: I0313 01:12:56.477456 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:12:56.480818 master-0 kubenswrapper[7599]: I0313 01:12:56.480785 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:12:56.482079 master-0 kubenswrapper[7599]: I0313 01:12:56.482041 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/81835d51-a414-440f-889b-690561e98d6a-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:12:56.519044 master-0 kubenswrapper[7599]: I0313 01:12:56.516890 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"]
Mar 13 01:12:56.578884 master-0 kubenswrapper[7599]: I0313 01:12:56.578754 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbcg4\" (UniqueName: \"kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-kube-api-access-nbcg4\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.578884 master-0 kubenswrapper[7599]: I0313 01:12:56.578798 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.578884 master-0 kubenswrapper[7599]: I0313 01:12:56.578856 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.579118 master-0 kubenswrapper[7599]: I0313 01:12:56.578907 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.579118 master-0 kubenswrapper[7599]: I0313 01:12:56.578926 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.579118 master-0 kubenswrapper[7599]: I0313 01:12:56.579012 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.579777 master-0 kubenswrapper[7599]: I0313 01:12:56.579752 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.579834 master-0 kubenswrapper[7599]: I0313 01:12:56.579809 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.582933 master-0 kubenswrapper[7599]: I0313 01:12:56.582910 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.642830 master-0 kubenswrapper[7599]: I0313 01:12:56.641270 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-1-master-0"]
Mar 13 01:12:56.653179 master-0 kubenswrapper[7599]: I0313 01:12:56.653137 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8dv\" (UniqueName: \"kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-kube-api-access-nd8dv\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:12:56.657867 master-0 kubenswrapper[7599]: I0313 01:12:56.657830 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbcg4\" (UniqueName: \"kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-kube-api-access-nbcg4\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.658681 master-0 kubenswrapper[7599]: W0313 01:12:56.658639 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddfb4407e_71fc_4684_aded_cc84f7e306dc.slice/crio-1c6514526947873408e0b49fddc6682f5c16ba101c6fab277e750a3d8d114b4c WatchSource:0}: Error finding container 1c6514526947873408e0b49fddc6682f5c16ba101c6fab277e750a3d8d114b4c: Status 404 returned error can't find the container with id 1c6514526947873408e0b49fddc6682f5c16ba101c6fab277e750a3d8d114b4c
Mar 13 01:12:56.732727 master-0 kubenswrapper[7599]: I0313 01:12:56.731487 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"]
Mar 13 01:12:56.759375 master-0 kubenswrapper[7599]: I0313 01:12:56.759331 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" event={"ID":"8ad2a6d5-6edf-4840-89f9-47847c8dac05","Type":"ContainerStarted","Data":"94468d369b5f43adf08abc9d6a6230238254bef0eb81d4e6a3d5e925f29bcc13"}
Mar 13 01:12:56.760361 master-0 kubenswrapper[7599]: I0313 01:12:56.760339 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:12:56.760856 master-0 kubenswrapper[7599]: I0313 01:12:56.760807 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:56.762570 master-0 kubenswrapper[7599]: I0313 01:12:56.762538 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" event={"ID":"161d2fa6-a541-427a-a3e9-3297102a26f5","Type":"ContainerStarted","Data":"8f8f696e9a8bf7dc6e42d0e7944725436b3a7019ffcb294c234c413493797ce3"}
Mar 13 01:12:56.762686 master-0 kubenswrapper[7599]: I0313 01:12:56.762636 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body=
Mar 13 01:12:56.762838 master-0 kubenswrapper[7599]: I0313 01:12:56.762695 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused"
Mar 13 01:12:56.766001 master-0 kubenswrapper[7599]: I0313 01:12:56.765977 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"dfb4407e-71fc-4684-aded-cc84f7e306dc","Type":"ContainerStarted","Data":"1c6514526947873408e0b49fddc6682f5c16ba101c6fab277e750a3d8d114b4c"}
Mar 13 01:12:56.769092 master-0 kubenswrapper[7599]: I0313 01:12:56.769059 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9hwz9" event={"ID":"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d","Type":"ContainerStarted","Data":"237f591b86c8ea4372dc28e77f58446762ef9f9fe304a7f69a12fe66b5b8cf9f"}
Mar 13 01:12:56.769651 master-0 kubenswrapper[7599]: W0313 01:12:56.769618 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod77fb9b0a_8127_4594_99ae_98f9000d5cc4.slice/crio-4821123f75c4f98aac0c63ff5b60993794c88b10ede5c55baa01018f5c66cd39 WatchSource:0}: Error finding container 4821123f75c4f98aac0c63ff5b60993794c88b10ede5c55baa01018f5c66cd39: Status 404 returned error can't find the container with id 4821123f75c4f98aac0c63ff5b60993794c88b10ede5c55baa01018f5c66cd39
Mar 13 01:12:56.773333 master-0 kubenswrapper[7599]: I0313 01:12:56.773298 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" event={"ID":"46015913-c499-49b1-a9f6-a61c6e96b13f","Type":"ContainerStarted","Data":"66ffb2892c2dc53709fbc12486304b117a24bf62c9867bc4fb0d06da38dfc962"}
Mar 13 01:12:56.820962 master-0 kubenswrapper[7599]: I0313 01:12:56.820608 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st"]
Mar 13 01:12:56.820962 master-0 kubenswrapper[7599]: I0313 01:12:56.820889 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:12:56.936900 master-0 kubenswrapper[7599]: I0313 01:12:56.936855 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:12:57.679404 master-0 kubenswrapper[7599]: I0313 01:12:57.678144 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"]
Mar 13 01:12:57.679404 master-0 kubenswrapper[7599]: I0313 01:12:57.678217 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"]
Mar 13 01:12:57.794991 master-0 kubenswrapper[7599]: I0313 01:12:57.792419 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" event={"ID":"6ad2904e-ece9-4d72-8683-c3e691e07497","Type":"ContainerStarted","Data":"bfcb25774008adbc1b0e8f428d12cf425f48b4171fabdb2acdc8935de47c8a28"}
Mar 13 01:12:57.794991 master-0 kubenswrapper[7599]: I0313 01:12:57.793247 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:12:57.798687 master-0 kubenswrapper[7599]: I0313 01:12:57.797647 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" event={"ID":"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59","Type":"ContainerStarted","Data":"e44a4909dcffde49ad35027597a7d7ccdbfe6e7971eece0f54a4f97505f5966a"}
Mar 13 01:12:57.798687 master-0 kubenswrapper[7599]: I0313 01:12:57.798175 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:12:57.804833 master-0 kubenswrapper[7599]: I0313 01:12:57.801043 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" event={"ID":"161d2fa6-a541-427a-a3e9-3297102a26f5","Type":"ContainerStarted","Data":"29a58358b12bdde755e9400ad8a4200dcdb32c73e3b68b4a2a8493087061b74e"}
Mar 13 01:12:57.804833 master-0 kubenswrapper[7599]: I0313 01:12:57.803593 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:12:57.804833 master-0 kubenswrapper[7599]: I0313 01:12:57.803866 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"77fb9b0a-8127-4594-99ae-98f9000d5cc4","Type":"ContainerStarted","Data":"5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a"}
Mar 13 01:12:57.804833 master-0 kubenswrapper[7599]: I0313 01:12:57.803885 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"77fb9b0a-8127-4594-99ae-98f9000d5cc4","Type":"ContainerStarted","Data":"4821123f75c4f98aac0c63ff5b60993794c88b10ede5c55baa01018f5c66cd39"}
Mar 13 01:12:57.821339 master-0 kubenswrapper[7599]: I0313 01:12:57.817674 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"dfb4407e-71fc-4684-aded-cc84f7e306dc","Type":"ContainerStarted","Data":"0f4de141c58d0310f424a3def148eab28bc960622ee39d63fb837590fa97a3c8"}
Mar 13 01:12:57.823705 master-0 kubenswrapper[7599]: I0313 01:12:57.823672 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-9hwz9" event={"ID":"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d","Type":"ContainerStarted","Data":"aa55f97b290179f76298e012ba0c2f1aaa0f6734025a8b727341882069fe6cf7"}
Mar 13 01:12:57.841266 master-0 kubenswrapper[7599]: I0313 01:12:57.840849 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pfsjd" event={"ID":"95c7493b-ad9d-490e-83f3-aa28750b2b5e","Type":"ContainerStarted","Data":"2ebf040cc83c123a3254e1881a9b23718e34291886f5643b11df339473e59c97"}
Mar 13 01:12:57.841266 master-0 kubenswrapper[7599]: I0313 01:12:57.840895 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-pfsjd" event={"ID":"95c7493b-ad9d-490e-83f3-aa28750b2b5e","Type":"ContainerStarted","Data":"81cb8ce566d11b51b97eff7f24b7b19fdfbde0f6826f5d63e89c4cce5ad1b584"}
Mar 13 01:12:57.841266 master-0 kubenswrapper[7599]: I0313 01:12:57.841260 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pfsjd"
Mar 13 01:12:57.858569 master-0 kubenswrapper[7599]: I0313 01:12:57.858232 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" event={"ID":"81835d51-a414-440f-889b-690561e98d6a","Type":"ContainerStarted","Data":"6383bf63a7de4dff04fb7232e0771348dcd4ed98fc693d66e08acc1fc0e8ce69"}
Mar 13 01:12:57.870745 master-0 kubenswrapper[7599]: I0313 01:12:57.870692 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" event={"ID":"31f19d97-50f9-4486-a8f9-df61ef2b0528","Type":"ContainerStarted","Data":"0cbc2482b818d8306ce3a221e22eb7d5321b94bb704f9dd43300b1e5cfa7fd67"}
Mar 13 01:12:57.871484 master-0 kubenswrapper[7599]: I0313 01:12:57.871454 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:12:57.887678 master-0 kubenswrapper[7599]: I0313 01:12:57.879970 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:12:57.902501 master-0 kubenswrapper[7599]: I0313 01:12:57.900748 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" event={"ID":"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e","Type":"ContainerStarted","Data":"9d7ef7e44d8730ad2d704e378ac9c92d16d1c8fa25bdd5cfebf66d699f0e0906"}
Mar 13 01:12:57.924051 master-0 kubenswrapper[7599]: I0313 01:12:57.916716 7599 generic.go:334] "Generic (PLEG): container finished" podID="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" containerID="bce7dc8174f12b3e41c7f7d3531e034e590edcaa83e3928c6f42ad9ec7e9122d" exitCode=0
Mar 13 01:12:57.924051 master-0 kubenswrapper[7599]: I0313 01:12:57.917809 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" event={"ID":"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16","Type":"ContainerDied","Data":"bce7dc8174f12b3e41c7f7d3531e034e590edcaa83e3928c6f42ad9ec7e9122d"}
Mar 13 01:12:57.924051 master-0 kubenswrapper[7599]: I0313 01:12:57.923609 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" event={"ID":"536a2de1-e13c-47d1-b61d-88e0a5fd2851","Type":"ContainerStarted","Data":"50cd4dbba0595bc95bd8379d7cfd780825252615fdd5f10e3bb402ec0d1d10ce"}
Mar 13 01:12:57.942538 master-0 kubenswrapper[7599]: I0313 01:12:57.932609 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-2-master-0" podStartSLOduration=3.932589061 podStartE2EDuration="3.932589061s" podCreationTimestamp="2026-03-13 01:12:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:57.93063935 +0000 UTC m=+37.202318754" watchObservedRunningTime="2026-03-13 01:12:57.932589061 +0000 UTC m=+37.204268455"
Mar 13 01:12:57.951984 master-0 kubenswrapper[7599]: I0313 01:12:57.947327 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:12:58.011536 master-0 kubenswrapper[7599]: I0313 01:12:58.008030 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-pfsjd" podStartSLOduration=3.598073994 podStartE2EDuration="13.008008986s" podCreationTimestamp="2026-03-13 01:12:45 +0000 UTC" firstStartedPulling="2026-03-13 01:12:46.856085381 +0000 UTC m=+26.127764775" lastFinishedPulling="2026-03-13 01:12:56.266020363 +0000 UTC m=+35.537699767" observedRunningTime="2026-03-13 01:12:57.998286326 +0000 UTC m=+37.269965720" watchObservedRunningTime="2026-03-13 01:12:58.008008986 +0000 UTC m=+37.279688380"
Mar 13 01:12:58.091856 master-0 kubenswrapper[7599]: I0313 01:12:58.090787 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-1-master-0" podStartSLOduration=9.090756023 podStartE2EDuration="9.090756023s" podCreationTimestamp="2026-03-13 01:12:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:58.089918577 +0000 UTC m=+37.361597981" watchObservedRunningTime="2026-03-13 01:12:58.090756023 +0000 UTC m=+37.362435417"
Mar 13 01:12:58.408795 master-0 kubenswrapper[7599]: I0313 01:12:58.408727 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-jzlpt"]
Mar 13 01:12:58.409638 master-0 kubenswrapper[7599]: I0313 01:12:58.409597 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:58.473850 master-0 kubenswrapper[7599]: I0313 01:12:58.464613 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzlpt"]
Mar 13 01:12:58.509569 master-0 kubenswrapper[7599]: I0313 01:12:58.509441 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpp24\" (UniqueName: \"kubernetes.io/projected/40c57f94-16b7-4011-bc29-386d52a06d2a-kube-api-access-dpp24\") pod \"community-operators-jzlpt\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") " pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:58.509569 master-0 kubenswrapper[7599]: I0313 01:12:58.509490 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-utilities\") pod \"community-operators-jzlpt\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") " pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:58.509569 master-0 kubenswrapper[7599]: I0313 01:12:58.509559 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-catalog-content\") pod \"community-operators-jzlpt\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") " pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:58.610611 master-0 kubenswrapper[7599]: I0313 01:12:58.610574 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-utilities\") pod \"community-operators-jzlpt\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") " pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:58.610727 master-0 kubenswrapper[7599]: I0313 01:12:58.610642 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-catalog-content\") pod \"community-operators-jzlpt\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") " pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:58.610727 master-0 kubenswrapper[7599]: I0313 01:12:58.610669 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpp24\" (UniqueName: \"kubernetes.io/projected/40c57f94-16b7-4011-bc29-386d52a06d2a-kube-api-access-dpp24\") pod \"community-operators-jzlpt\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") " pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:58.611900 master-0 kubenswrapper[7599]: I0313 01:12:58.611863 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-utilities\") pod \"community-operators-jzlpt\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") " pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:58.612110 master-0 kubenswrapper[7599]: I0313 01:12:58.612059 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-catalog-content\") pod \"community-operators-jzlpt\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") " pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:58.629874 master-0 kubenswrapper[7599]: I0313 01:12:58.629824 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpp24\" (UniqueName: \"kubernetes.io/projected/40c57f94-16b7-4011-bc29-386d52a06d2a-kube-api-access-dpp24\") pod \"community-operators-jzlpt\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") " pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:58.756081 master-0 kubenswrapper[7599]: I0313 01:12:58.756007 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:12:59.023388 master-0 kubenswrapper[7599]: I0313 01:12:59.023316 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" event={"ID":"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e","Type":"ContainerStarted","Data":"0bc643f1562d747ecebe6ead7cded6ab7e4067e6e477b19445331f9f08f258c9"}
Mar 13 01:12:59.023388 master-0 kubenswrapper[7599]: I0313 01:12:59.023378 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" event={"ID":"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e","Type":"ContainerStarted","Data":"fd379745af9da3dead649206438373348f4ca6dba57dff1deac4d0df35fc6fc1"}
Mar 13 01:12:59.023660 master-0 kubenswrapper[7599]: I0313 01:12:59.023619 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:12:59.030817 master-0 kubenswrapper[7599]: I0313 01:12:59.030762 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" event={"ID":"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16","Type":"ContainerStarted","Data":"a2064cf0685584cb79bafc1228113bb0ec2e46f0c78d8ca4f8bb45f36b892e81"}
Mar 13 01:12:59.030885 master-0 kubenswrapper[7599]: I0313 01:12:59.030827 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" event={"ID":"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16","Type":"ContainerStarted","Data":"1c03bc64990a10b260c0c55515d1fcfd0b6ad98935ba736f59b8e9fac792496d"}
Mar 13 01:12:59.041292 master-0 kubenswrapper[7599]: I0313 01:12:59.041129 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" event={"ID":"81835d51-a414-440f-889b-690561e98d6a","Type":"ContainerStarted","Data":"e9eb86bc8639ac87892dc75bde4aa22bd6e683c301d4d69ac50acf0d02a2db39"}
Mar 13 01:12:59.041399 master-0 kubenswrapper[7599]: I0313 01:12:59.041385 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" event={"ID":"81835d51-a414-440f-889b-690561e98d6a","Type":"ContainerStarted","Data":"1e2250b88cffcf608ae9e94e138fc99209d5f06734f2ab5f6162913a989a5e45"}
Mar 13 01:12:59.063845 master-0 kubenswrapper[7599]: I0313 01:12:59.062698 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" podStartSLOduration=3.062671381 podStartE2EDuration="3.062671381s" podCreationTimestamp="2026-03-13 01:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:59.058345603 +0000 UTC m=+38.330024997" watchObservedRunningTime="2026-03-13 01:12:59.062671381 +0000 UTC m=+38.334350785"
Mar 13 01:12:59.083024 master-0 kubenswrapper[7599]: I0313 01:12:59.082969 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-jzlpt"]
Mar 13 01:12:59.083218 master-0 kubenswrapper[7599]: I0313 01:12:59.083081 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" podStartSLOduration=6.245089827 podStartE2EDuration="16.083052041s" podCreationTimestamp="2026-03-13 01:12:43 +0000 UTC" firstStartedPulling="2026-03-13 01:12:46.428143992 +0000 UTC m=+25.699823386" lastFinishedPulling="2026-03-13 01:12:56.266106196 +0000 UTC m=+35.537785600" observedRunningTime="2026-03-13 01:12:59.082810463 +0000 UTC m=+38.354489847" watchObservedRunningTime="2026-03-13 01:12:59.083052041 +0000 UTC m=+38.354731435"
Mar 13 01:12:59.202192 master-0 kubenswrapper[7599]: I0313 01:12:59.202113 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" podStartSLOduration=3.202088825 podStartE2EDuration="3.202088825s" podCreationTimestamp="2026-03-13 01:12:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:12:59.106408305 +0000 UTC m=+38.378087709" watchObservedRunningTime="2026-03-13 01:12:59.202088825 +0000 UTC m=+38.473768229"
Mar 13 01:12:59.203465 master-0 kubenswrapper[7599]: I0313 01:12:59.203439 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7mqtr"]
Mar 13 01:12:59.204420 master-0 kubenswrapper[7599]: I0313 01:12:59.204386 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.262493 master-0 kubenswrapper[7599]: I0313 01:12:59.215686 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mqtr"]
Mar 13 01:12:59.337479 master-0 kubenswrapper[7599]: I0313 01:12:59.337419 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-catalog-content\") pod \"redhat-marketplace-7mqtr\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") " pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.337630 master-0 kubenswrapper[7599]: I0313 01:12:59.337573 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnlk7\" (UniqueName: \"kubernetes.io/projected/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-kube-api-access-rnlk7\") pod \"redhat-marketplace-7mqtr\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") " pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.337689 master-0 kubenswrapper[7599]: I0313 01:12:59.337637 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-utilities\") pod \"redhat-marketplace-7mqtr\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") " pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.443629 master-0 kubenswrapper[7599]: I0313 01:12:59.439400 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-catalog-content\") pod \"redhat-marketplace-7mqtr\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") " pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.443629 master-0 kubenswrapper[7599]: I0313 01:12:59.439478 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnlk7\" (UniqueName: \"kubernetes.io/projected/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-kube-api-access-rnlk7\") pod \"redhat-marketplace-7mqtr\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") " pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.443629 master-0 kubenswrapper[7599]: I0313 01:12:59.439662 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-utilities\") pod \"redhat-marketplace-7mqtr\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") " pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.443629 master-0 kubenswrapper[7599]: I0313 01:12:59.440089 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-catalog-content\") pod \"redhat-marketplace-7mqtr\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") " pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.443629 master-0 kubenswrapper[7599]: I0313 01:12:59.440161 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-utilities\") pod \"redhat-marketplace-7mqtr\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") " pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.469749 master-0 kubenswrapper[7599]: I0313 01:12:59.465624 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnlk7\" (UniqueName: \"kubernetes.io/projected/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-kube-api-access-rnlk7\") pod \"redhat-marketplace-7mqtr\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") " pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.588959 master-0 kubenswrapper[7599]: I0313 01:12:59.588875 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:12:59.869153 master-0 kubenswrapper[7599]: I0313 01:12:59.869072 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"]
Mar 13 01:12:59.869726 master-0 kubenswrapper[7599]: I0313 01:12:59.869417 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" podUID="2d368174-c659-444e-ba28-8fa267c0eda6" containerName="cluster-version-operator" containerID="cri-o://6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58" gracePeriod=130
Mar 13 01:13:00.007743 master-0 kubenswrapper[7599]: I0313 01:13:00.007701 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"
Mar 13 01:13:00.056211 master-0 kubenswrapper[7599]: I0313 01:13:00.054735 7599 generic.go:334] "Generic (PLEG): container finished" podID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerID="66ac2b182d8988508548db956904c7eb36936256dfbe1d0d938933e382dd821d" exitCode=0
Mar 13 01:13:00.056211 master-0 kubenswrapper[7599]: I0313 01:13:00.054831 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzlpt" event={"ID":"40c57f94-16b7-4011-bc29-386d52a06d2a","Type":"ContainerDied","Data":"66ac2b182d8988508548db956904c7eb36936256dfbe1d0d938933e382dd821d"}
Mar 13 01:13:00.056211 master-0 kubenswrapper[7599]: I0313 01:13:00.054868 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzlpt" event={"ID":"40c57f94-16b7-4011-bc29-386d52a06d2a","Type":"ContainerStarted","Data":"b499ba30f4ea8be865dc7a8837d7f5fa14f7ab7345bba4ad96fb42befea24a27"}
Mar 13 01:13:00.056211 master-0 kubenswrapper[7599]: I0313 01:13:00.055609 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d368174-c659-444e-ba28-8fa267c0eda6-kube-api-access\") pod \"2d368174-c659-444e-ba28-8fa267c0eda6\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") "
Mar 13 01:13:00.056211 master-0 kubenswrapper[7599]: I0313 01:13:00.055693 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d368174-c659-444e-ba28-8fa267c0eda6-service-ca\") pod \"2d368174-c659-444e-ba28-8fa267c0eda6\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") "
Mar 13 01:13:00.056211 master-0 kubenswrapper[7599]: I0313 01:13:00.055748 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-cvo-updatepayloads\") pod \"2d368174-c659-444e-ba28-8fa267c0eda6\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") "
Mar 13 01:13:00.056211 master-0 kubenswrapper[7599]: I0313 01:13:00.055822 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") pod \"2d368174-c659-444e-ba28-8fa267c0eda6\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") "
Mar 13 01:13:00.056211 master-0 kubenswrapper[7599]: I0313 01:13:00.055901 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-ssl-certs\") pod \"2d368174-c659-444e-ba28-8fa267c0eda6\" (UID: \"2d368174-c659-444e-ba28-8fa267c0eda6\") "
Mar 13 01:13:00.056660 master-0 kubenswrapper[7599]: I0313 01:13:00.056246 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-ssl-certs" (OuterVolumeSpecName: "etc-ssl-certs") pod "2d368174-c659-444e-ba28-8fa267c0eda6" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6"). InnerVolumeSpecName "etc-ssl-certs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:13:00.056660 master-0 kubenswrapper[7599]: I0313 01:13:00.056282 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-cvo-updatepayloads" (OuterVolumeSpecName: "etc-cvo-updatepayloads") pod "2d368174-c659-444e-ba28-8fa267c0eda6" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6"). InnerVolumeSpecName "etc-cvo-updatepayloads". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:13:00.056660 master-0 kubenswrapper[7599]: I0313 01:13:00.056434 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d368174-c659-444e-ba28-8fa267c0eda6-service-ca" (OuterVolumeSpecName: "service-ca") pod "2d368174-c659-444e-ba28-8fa267c0eda6" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:13:00.058388 master-0 kubenswrapper[7599]: I0313 01:13:00.058237 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mqtr"]
Mar 13 01:13:00.061060 master-0 kubenswrapper[7599]: I0313 01:13:00.060996 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d368174-c659-444e-ba28-8fa267c0eda6-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2d368174-c659-444e-ba28-8fa267c0eda6" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:13:00.062553 master-0 kubenswrapper[7599]: I0313 01:13:00.062490 7599 generic.go:334] "Generic (PLEG): container finished" podID="2d368174-c659-444e-ba28-8fa267c0eda6" containerID="6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58" exitCode=0
Mar 13 01:13:00.063754 master-0 kubenswrapper[7599]: I0313 01:13:00.063495 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2d368174-c659-444e-ba28-8fa267c0eda6" (UID: "2d368174-c659-444e-ba28-8fa267c0eda6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:13:00.063754 master-0 kubenswrapper[7599]: I0313 01:13:00.063573 7599 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" Mar 13 01:13:00.063843 master-0 kubenswrapper[7599]: I0313 01:13:00.063770 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" event={"ID":"2d368174-c659-444e-ba28-8fa267c0eda6","Type":"ContainerDied","Data":"6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58"} Mar 13 01:13:00.063875 master-0 kubenswrapper[7599]: I0313 01:13:00.063846 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs" event={"ID":"2d368174-c659-444e-ba28-8fa267c0eda6","Type":"ContainerDied","Data":"b54155a5db31eb0df3f308a670d9f6fabe70860c769343bf09370d04c49698f7"} Mar 13 01:13:00.063912 master-0 kubenswrapper[7599]: I0313 01:13:00.063899 7599 scope.go:117] "RemoveContainer" containerID="6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58" Mar 13 01:13:00.065113 master-0 kubenswrapper[7599]: I0313 01:13:00.065080 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:13:00.095564 master-0 kubenswrapper[7599]: W0313 01:13:00.089788 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9992615a_c49b_4ef0_b02b_c6cd2e719fa3.slice/crio-731c43764bf6ca60ccb49818767715764d1313ac8e97ad985509652329db44a1 WatchSource:0}: Error finding container 731c43764bf6ca60ccb49818767715764d1313ac8e97ad985509652329db44a1: Status 404 returned error can't find the container with id 731c43764bf6ca60ccb49818767715764d1313ac8e97ad985509652329db44a1 Mar 13 01:13:00.137605 master-0 kubenswrapper[7599]: I0313 01:13:00.136808 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"] Mar 13 01:13:00.139388 master-0 
kubenswrapper[7599]: I0313 01:13:00.138345 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-version/cluster-version-operator-745944c6b7-dqdgs"] Mar 13 01:13:00.140624 master-0 kubenswrapper[7599]: I0313 01:13:00.140461 7599 scope.go:117] "RemoveContainer" containerID="6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58" Mar 13 01:13:00.150822 master-0 kubenswrapper[7599]: E0313 01:13:00.150737 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58\": container with ID starting with 6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58 not found: ID does not exist" containerID="6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58" Mar 13 01:13:00.151097 master-0 kubenswrapper[7599]: I0313 01:13:00.150801 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58"} err="failed to get container status \"6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58\": rpc error: code = NotFound desc = could not find container \"6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58\": container with ID starting with 6fddca5498dd2d7907dc98f5dbc228a835e2d9f63bb0bb651d75d3af964f0f58 not found: ID does not exist" Mar 13 01:13:00.159587 master-0 kubenswrapper[7599]: I0313 01:13:00.157271 7599 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/2d368174-c659-444e-ba28-8fa267c0eda6-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:00.159587 master-0 kubenswrapper[7599]: I0313 01:13:00.157309 7599 reconciler_common.go:293] "Volume detached for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-cvo-updatepayloads\") on node 
\"master-0\" DevicePath \"\"" Mar 13 01:13:00.159587 master-0 kubenswrapper[7599]: I0313 01:13:00.157321 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d368174-c659-444e-ba28-8fa267c0eda6-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:00.159587 master-0 kubenswrapper[7599]: I0313 01:13:00.157331 7599 reconciler_common.go:293] "Volume detached for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/2d368174-c659-444e-ba28-8fa267c0eda6-etc-ssl-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:00.159587 master-0 kubenswrapper[7599]: I0313 01:13:00.157341 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2d368174-c659-444e-ba28-8fa267c0eda6-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:00.177980 master-0 kubenswrapper[7599]: I0313 01:13:00.177783 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v"] Mar 13 01:13:00.178133 master-0 kubenswrapper[7599]: E0313 01:13:00.178038 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d368174-c659-444e-ba28-8fa267c0eda6" containerName="cluster-version-operator" Mar 13 01:13:00.178133 master-0 kubenswrapper[7599]: I0313 01:13:00.178054 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d368174-c659-444e-ba28-8fa267c0eda6" containerName="cluster-version-operator" Mar 13 01:13:00.178204 master-0 kubenswrapper[7599]: I0313 01:13:00.178137 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d368174-c659-444e-ba28-8fa267c0eda6" containerName="cluster-version-operator" Mar 13 01:13:00.178974 master-0 kubenswrapper[7599]: I0313 01:13:00.178917 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.181358 master-0 kubenswrapper[7599]: I0313 01:13:00.181328 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Mar 13 01:13:00.181655 master-0 kubenswrapper[7599]: I0313 01:13:00.181624 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Mar 13 01:13:00.182152 master-0 kubenswrapper[7599]: I0313 01:13:00.182100 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Mar 13 01:13:00.259362 master-0 kubenswrapper[7599]: I0313 01:13:00.258827 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-service-ca\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.259362 master-0 kubenswrapper[7599]: I0313 01:13:00.258868 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.259362 master-0 kubenswrapper[7599]: I0313 01:13:00.258890 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " 
pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.259362 master-0 kubenswrapper[7599]: I0313 01:13:00.258912 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.259362 master-0 kubenswrapper[7599]: I0313 01:13:00.258953 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.360295 master-0 kubenswrapper[7599]: I0313 01:13:00.360123 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.360708 master-0 kubenswrapper[7599]: I0313 01:13:00.360635 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-service-ca\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.360708 master-0 kubenswrapper[7599]: I0313 01:13:00.360663 7599 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.361482 master-0 kubenswrapper[7599]: I0313 01:13:00.360863 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.361482 master-0 kubenswrapper[7599]: I0313 01:13:00.360918 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.361482 master-0 kubenswrapper[7599]: I0313 01:13:00.361030 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.361482 master-0 kubenswrapper[7599]: I0313 01:13:00.361086 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: 
\"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.361955 master-0 kubenswrapper[7599]: I0313 01:13:00.361900 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-service-ca\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.364194 master-0 kubenswrapper[7599]: I0313 01:13:00.364119 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.379114 master-0 kubenswrapper[7599]: I0313 01:13:00.379061 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.546255 master-0 kubenswrapper[7599]: I0313 01:13:00.546143 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:13:00.603993 master-0 kubenswrapper[7599]: I0313 01:13:00.602804 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-t88cc"] Mar 13 01:13:00.603993 master-0 kubenswrapper[7599]: I0313 01:13:00.603858 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:00.618178 master-0 kubenswrapper[7599]: I0313 01:13:00.618114 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t88cc"] Mar 13 01:13:00.665373 master-0 kubenswrapper[7599]: I0313 01:13:00.664073 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgzzr\" (UniqueName: \"kubernetes.io/projected/c6382e2a-ec14-4457-8f26-3087b19d1e1a-kube-api-access-pgzzr\") pod \"redhat-operators-t88cc\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:00.665373 master-0 kubenswrapper[7599]: I0313 01:13:00.664146 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-utilities\") pod \"redhat-operators-t88cc\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:00.665373 master-0 kubenswrapper[7599]: I0313 01:13:00.664190 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-catalog-content\") pod \"redhat-operators-t88cc\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:00.765212 master-0 kubenswrapper[7599]: I0313 01:13:00.765145 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgzzr\" (UniqueName: \"kubernetes.io/projected/c6382e2a-ec14-4457-8f26-3087b19d1e1a-kube-api-access-pgzzr\") pod \"redhat-operators-t88cc\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:00.765212 master-0 kubenswrapper[7599]: I0313 
01:13:00.765209 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-utilities\") pod \"redhat-operators-t88cc\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:00.765629 master-0 kubenswrapper[7599]: I0313 01:13:00.765602 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-catalog-content\") pod \"redhat-operators-t88cc\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:00.765975 master-0 kubenswrapper[7599]: I0313 01:13:00.765711 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-utilities\") pod \"redhat-operators-t88cc\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:00.765975 master-0 kubenswrapper[7599]: I0313 01:13:00.765915 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-catalog-content\") pod \"redhat-operators-t88cc\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:00.784524 master-0 kubenswrapper[7599]: I0313 01:13:00.784458 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgzzr\" (UniqueName: \"kubernetes.io/projected/c6382e2a-ec14-4457-8f26-3087b19d1e1a-kube-api-access-pgzzr\") pod \"redhat-operators-t88cc\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:00.948196 master-0 kubenswrapper[7599]: I0313 01:13:00.948042 7599 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:01.003190 master-0 kubenswrapper[7599]: I0313 01:13:01.002266 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d368174-c659-444e-ba28-8fa267c0eda6" path="/var/lib/kubelet/pods/2d368174-c659-444e-ba28-8fa267c0eda6/volumes" Mar 13 01:13:01.092756 master-0 kubenswrapper[7599]: I0313 01:13:01.092641 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:13:01.092756 master-0 kubenswrapper[7599]: I0313 01:13:01.092708 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:13:01.096550 master-0 kubenswrapper[7599]: I0313 01:13:01.093549 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mqtr" event={"ID":"9992615a-c49b-4ef0-b02b-c6cd2e719fa3","Type":"ContainerStarted","Data":"39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87"} Mar 13 01:13:01.096550 master-0 kubenswrapper[7599]: I0313 01:13:01.093593 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mqtr" event={"ID":"9992615a-c49b-4ef0-b02b-c6cd2e719fa3","Type":"ContainerStarted","Data":"731c43764bf6ca60ccb49818767715764d1313ac8e97ad985509652329db44a1"} Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: I0313 01:13:01.117040 7599 patch_prober.go:28] interesting pod/apiserver-7dbfb86fbb-mc7xz container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]log ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]etcd ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/start-apiserver-admission-initializer ok 
Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/generic-apiserver-start-informers ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/max-in-flight-filter ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/storage-object-count-tracker-hook ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/image.openshift.io-apiserver-caches ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/project.openshift.io-projectcache ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/openshift.io-startinformers ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/openshift.io-restmapperupdater ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Mar 13 01:13:01.117122 master-0 kubenswrapper[7599]: livez check failed Mar 13 01:13:01.117991 master-0 kubenswrapper[7599]: I0313 01:13:01.117136 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" podUID="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 01:13:01.606335 master-0 kubenswrapper[7599]: I0313 01:13:01.603476 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-xnmjr"] Mar 13 01:13:01.606335 master-0 kubenswrapper[7599]: I0313 01:13:01.604469 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:01.619725 master-0 kubenswrapper[7599]: I0313 01:13:01.619024 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xnmjr"] Mar 13 01:13:01.704049 master-0 kubenswrapper[7599]: I0313 01:13:01.703993 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-utilities\") pod \"certified-operators-xnmjr\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:01.704208 master-0 kubenswrapper[7599]: I0313 01:13:01.704066 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-catalog-content\") pod \"certified-operators-xnmjr\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:01.704208 master-0 kubenswrapper[7599]: I0313 01:13:01.704156 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw474\" (UniqueName: \"kubernetes.io/projected/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-kube-api-access-zw474\") pod \"certified-operators-xnmjr\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:01.805948 master-0 kubenswrapper[7599]: I0313 01:13:01.805703 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zw474\" (UniqueName: \"kubernetes.io/projected/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-kube-api-access-zw474\") pod \"certified-operators-xnmjr\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:01.805948 
master-0 kubenswrapper[7599]: I0313 01:13:01.805790 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-utilities\") pod \"certified-operators-xnmjr\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:01.806185 master-0 kubenswrapper[7599]: I0313 01:13:01.806038 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-catalog-content\") pod \"certified-operators-xnmjr\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:01.807081 master-0 kubenswrapper[7599]: I0313 01:13:01.806734 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-utilities\") pod \"certified-operators-xnmjr\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:01.807081 master-0 kubenswrapper[7599]: I0313 01:13:01.806794 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-catalog-content\") pod \"certified-operators-xnmjr\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:01.822911 master-0 kubenswrapper[7599]: I0313 01:13:01.822872 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zw474\" (UniqueName: \"kubernetes.io/projected/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-kube-api-access-zw474\") pod \"certified-operators-xnmjr\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 
01:13:01.916720 master-0 kubenswrapper[7599]: I0313 01:13:01.916637 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-t88cc"] Mar 13 01:13:01.937617 master-0 kubenswrapper[7599]: W0313 01:13:01.937552 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6382e2a_ec14_4457_8f26_3087b19d1e1a.slice/crio-7137ef449e547dd401ee27f3f443a2af47f35fca54a0b207f6e8c71de0c42b56 WatchSource:0}: Error finding container 7137ef449e547dd401ee27f3f443a2af47f35fca54a0b207f6e8c71de0c42b56: Status 404 returned error can't find the container with id 7137ef449e547dd401ee27f3f443a2af47f35fca54a0b207f6e8c71de0c42b56 Mar 13 01:13:01.937923 master-0 kubenswrapper[7599]: I0313 01:13:01.937879 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:02.113043 master-0 kubenswrapper[7599]: I0313 01:13:02.112880 7599 generic.go:334] "Generic (PLEG): container finished" podID="536a2de1-e13c-47d1-b61d-88e0a5fd2851" containerID="9403cb28b6d645239098a1a9ce49ec1906fc26f7e015e1b08e21da092fbdcce4" exitCode=0 Mar 13 01:13:02.113043 master-0 kubenswrapper[7599]: I0313 01:13:02.112946 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" event={"ID":"536a2de1-e13c-47d1-b61d-88e0a5fd2851","Type":"ContainerDied","Data":"9403cb28b6d645239098a1a9ce49ec1906fc26f7e015e1b08e21da092fbdcce4"} Mar 13 01:13:02.115705 master-0 kubenswrapper[7599]: I0313 01:13:02.115603 7599 generic.go:334] "Generic (PLEG): container finished" podID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerID="39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87" exitCode=0 Mar 13 01:13:02.115705 master-0 kubenswrapper[7599]: I0313 01:13:02.115670 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mqtr" 
event={"ID":"9992615a-c49b-4ef0-b02b-c6cd2e719fa3","Type":"ContainerDied","Data":"39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87"} Mar 13 01:13:02.117607 master-0 kubenswrapper[7599]: I0313 01:13:02.117566 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" event={"ID":"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1","Type":"ContainerStarted","Data":"4f14fab0dbb3eda2a307a2d270febfa72f62097bfd703e6c81d2be48ab7a51a0"} Mar 13 01:13:02.117702 master-0 kubenswrapper[7599]: I0313 01:13:02.117619 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" event={"ID":"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1","Type":"ContainerStarted","Data":"1ea0ea4e5eed6b85ccc36c4c8c0dc8b3b9419340ae19c9233bb9409a6a59c6b0"} Mar 13 01:13:02.152188 master-0 kubenswrapper[7599]: I0313 01:13:02.152125 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t88cc" event={"ID":"c6382e2a-ec14-4457-8f26-3087b19d1e1a","Type":"ContainerStarted","Data":"e1fcc52d488ce48143ce55b0912ced806f3b7c7c5405ad16801b3c8761538abc"} Mar 13 01:13:02.152338 master-0 kubenswrapper[7599]: I0313 01:13:02.152207 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t88cc" event={"ID":"c6382e2a-ec14-4457-8f26-3087b19d1e1a","Type":"ContainerStarted","Data":"7137ef449e547dd401ee27f3f443a2af47f35fca54a0b207f6e8c71de0c42b56"} Mar 13 01:13:02.152876 master-0 kubenswrapper[7599]: I0313 01:13:02.152819 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" podStartSLOduration=2.152796312 podStartE2EDuration="2.152796312s" podCreationTimestamp="2026-03-13 01:13:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-13 01:13:02.150597562 +0000 UTC m=+41.422276956" watchObservedRunningTime="2026-03-13 01:13:02.152796312 +0000 UTC m=+41.424475706" Mar 13 01:13:02.359576 master-0 kubenswrapper[7599]: I0313 01:13:02.357982 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-xnmjr"] Mar 13 01:13:02.371661 master-0 kubenswrapper[7599]: W0313 01:13:02.367725 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod39bfb7e2_d1a8_4791_a52e_72f2b4790f96.slice/crio-16702ae6bf55253a1d4eab890d7c44c135c95ffb1a9130b6d582c2d745d25c4a WatchSource:0}: Error finding container 16702ae6bf55253a1d4eab890d7c44c135c95ffb1a9130b6d582c2d745d25c4a: Status 404 returned error can't find the container with id 16702ae6bf55253a1d4eab890d7c44c135c95ffb1a9130b6d582c2d745d25c4a Mar 13 01:13:03.168003 master-0 kubenswrapper[7599]: I0313 01:13:03.167928 7599 generic.go:334] "Generic (PLEG): container finished" podID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerID="e1fcc52d488ce48143ce55b0912ced806f3b7c7c5405ad16801b3c8761538abc" exitCode=0 Mar 13 01:13:03.168732 master-0 kubenswrapper[7599]: I0313 01:13:03.168029 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t88cc" event={"ID":"c6382e2a-ec14-4457-8f26-3087b19d1e1a","Type":"ContainerDied","Data":"e1fcc52d488ce48143ce55b0912ced806f3b7c7c5405ad16801b3c8761538abc"} Mar 13 01:13:03.170812 master-0 kubenswrapper[7599]: I0313 01:13:03.170777 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" event={"ID":"536a2de1-e13c-47d1-b61d-88e0a5fd2851","Type":"ContainerStarted","Data":"5352df0ba5212d71e21d3240783e5bf999e7476a9c6489d3f14f0fc6667ee06f"} Mar 13 01:13:03.173323 master-0 kubenswrapper[7599]: I0313 01:13:03.173287 7599 generic.go:334] "Generic (PLEG): container finished" 
podID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerID="abf065e579740424bc4601bcbfcedea8ca832288e848753af66ad4e44ef4bf9f" exitCode=0 Mar 13 01:13:03.173386 master-0 kubenswrapper[7599]: I0313 01:13:03.173326 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnmjr" event={"ID":"39bfb7e2-d1a8-4791-a52e-72f2b4790f96","Type":"ContainerDied","Data":"abf065e579740424bc4601bcbfcedea8ca832288e848753af66ad4e44ef4bf9f"} Mar 13 01:13:03.173386 master-0 kubenswrapper[7599]: I0313 01:13:03.173347 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnmjr" event={"ID":"39bfb7e2-d1a8-4791-a52e-72f2b4790f96","Type":"ContainerStarted","Data":"16702ae6bf55253a1d4eab890d7c44c135c95ffb1a9130b6d582c2d745d25c4a"} Mar 13 01:13:03.195216 master-0 kubenswrapper[7599]: I0313 01:13:03.193357 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" podStartSLOduration=8.515802022 podStartE2EDuration="13.193337298s" podCreationTimestamp="2026-03-13 01:12:50 +0000 UTC" firstStartedPulling="2026-03-13 01:12:56.854893993 +0000 UTC m=+36.126573387" lastFinishedPulling="2026-03-13 01:13:01.532429259 +0000 UTC m=+40.804108663" observedRunningTime="2026-03-13 01:13:03.192176681 +0000 UTC m=+42.463856075" watchObservedRunningTime="2026-03-13 01:13:03.193337298 +0000 UTC m=+42.465016692" Mar 13 01:13:03.528376 master-0 kubenswrapper[7599]: I0313 01:13:03.528327 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 01:13:03.528633 master-0 kubenswrapper[7599]: I0313 01:13:03.528579 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-2-master-0" podUID="77fb9b0a-8127-4594-99ae-98f9000d5cc4" containerName="installer" containerID="cri-o://5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a" 
gracePeriod=30 Mar 13 01:13:03.810255 master-0 kubenswrapper[7599]: I0313 01:13:03.810136 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 01:13:03.810791 master-0 kubenswrapper[7599]: I0313 01:13:03.810766 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:03.815688 master-0 kubenswrapper[7599]: I0313 01:13:03.815651 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 01:13:03.823880 master-0 kubenswrapper[7599]: I0313 01:13:03.823844 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 01:13:03.961805 master-0 kubenswrapper[7599]: I0313 01:13:03.961660 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94588bf1-f4cd-4446-999e-0039539e65a5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:03.962183 master-0 kubenswrapper[7599]: I0313 01:13:03.961912 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-var-lock\") pod \"installer-1-master-0\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:03.962183 master-0 kubenswrapper[7599]: I0313 01:13:03.962063 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-kubelet-dir\") pod \"installer-1-master-0\" (UID: 
\"94588bf1-f4cd-4446-999e-0039539e65a5\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:03.993372 master-0 kubenswrapper[7599]: I0313 01:13:03.993328 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_77fb9b0a-8127-4594-99ae-98f9000d5cc4/installer/0.log" Mar 13 01:13:03.993633 master-0 kubenswrapper[7599]: I0313 01:13:03.993476 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:13:04.063916 master-0 kubenswrapper[7599]: I0313 01:13:04.063771 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kubelet-dir\") pod \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " Mar 13 01:13:04.064373 master-0 kubenswrapper[7599]: I0313 01:13:04.063923 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kube-api-access\") pod \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " Mar 13 01:13:04.064373 master-0 kubenswrapper[7599]: I0313 01:13:04.064004 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-var-lock\") pod \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\" (UID: \"77fb9b0a-8127-4594-99ae-98f9000d5cc4\") " Mar 13 01:13:04.064373 master-0 kubenswrapper[7599]: I0313 01:13:04.064229 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "77fb9b0a-8127-4594-99ae-98f9000d5cc4" (UID: "77fb9b0a-8127-4594-99ae-98f9000d5cc4"). 
InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:04.064496 master-0 kubenswrapper[7599]: I0313 01:13:04.064345 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-var-lock" (OuterVolumeSpecName: "var-lock") pod "77fb9b0a-8127-4594-99ae-98f9000d5cc4" (UID: "77fb9b0a-8127-4594-99ae-98f9000d5cc4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:04.064496 master-0 kubenswrapper[7599]: I0313 01:13:04.064412 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-var-lock\") pod \"installer-1-master-0\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:04.065032 master-0 kubenswrapper[7599]: I0313 01:13:04.064560 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-var-lock\") pod \"installer-1-master-0\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:04.065032 master-0 kubenswrapper[7599]: I0313 01:13:04.064742 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:04.065032 master-0 kubenswrapper[7599]: I0313 01:13:04.064892 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94588bf1-f4cd-4446-999e-0039539e65a5-kube-api-access\") pod 
\"installer-1-master-0\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:04.065032 master-0 kubenswrapper[7599]: I0313 01:13:04.064899 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:04.065032 master-0 kubenswrapper[7599]: I0313 01:13:04.064963 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:04.065032 master-0 kubenswrapper[7599]: I0313 01:13:04.064977 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:04.067336 master-0 kubenswrapper[7599]: I0313 01:13:04.067281 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "77fb9b0a-8127-4594-99ae-98f9000d5cc4" (UID: "77fb9b0a-8127-4594-99ae-98f9000d5cc4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:13:04.080714 master-0 kubenswrapper[7599]: I0313 01:13:04.080677 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94588bf1-f4cd-4446-999e-0039539e65a5-kube-api-access\") pod \"installer-1-master-0\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:04.140745 master-0 kubenswrapper[7599]: I0313 01:13:04.140685 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:04.165603 master-0 kubenswrapper[7599]: I0313 01:13:04.165542 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/77fb9b0a-8127-4594-99ae-98f9000d5cc4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:04.185364 master-0 kubenswrapper[7599]: I0313 01:13:04.185325 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-2-master-0_77fb9b0a-8127-4594-99ae-98f9000d5cc4/installer/0.log" Mar 13 01:13:04.185751 master-0 kubenswrapper[7599]: I0313 01:13:04.185382 7599 generic.go:334] "Generic (PLEG): container finished" podID="77fb9b0a-8127-4594-99ae-98f9000d5cc4" containerID="5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a" exitCode=1 Mar 13 01:13:04.185883 master-0 kubenswrapper[7599]: I0313 01:13:04.185770 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-2-master-0" Mar 13 01:13:04.186261 master-0 kubenswrapper[7599]: I0313 01:13:04.186001 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"77fb9b0a-8127-4594-99ae-98f9000d5cc4","Type":"ContainerDied","Data":"5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a"} Mar 13 01:13:04.186261 master-0 kubenswrapper[7599]: I0313 01:13:04.186068 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-2-master-0" event={"ID":"77fb9b0a-8127-4594-99ae-98f9000d5cc4","Type":"ContainerDied","Data":"4821123f75c4f98aac0c63ff5b60993794c88b10ede5c55baa01018f5c66cd39"} Mar 13 01:13:04.186261 master-0 kubenswrapper[7599]: I0313 01:13:04.186095 7599 scope.go:117] "RemoveContainer" containerID="5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a" Mar 13 01:13:04.211688 master-0 kubenswrapper[7599]: I0313 01:13:04.211298 7599 scope.go:117] "RemoveContainer" containerID="5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a" Mar 13 01:13:04.212365 master-0 kubenswrapper[7599]: E0313 01:13:04.212300 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a\": container with ID starting with 5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a not found: ID does not exist" containerID="5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a" Mar 13 01:13:04.212365 master-0 kubenswrapper[7599]: I0313 01:13:04.212339 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a"} err="failed to get container status \"5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a\": rpc error: code = NotFound desc = could not find 
container \"5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a\": container with ID starting with 5f8af011d0528af1b58e5bde834805007b4c44c6bd79f26dce39cee3ca8faf0a not found: ID does not exist" Mar 13 01:13:04.226460 master-0 kubenswrapper[7599]: I0313 01:13:04.226420 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 01:13:04.232777 master-0 kubenswrapper[7599]: I0313 01:13:04.232736 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-2-master-0"] Mar 13 01:13:04.274560 master-0 kubenswrapper[7599]: I0313 01:13:04.270282 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca\") pod \"controller-manager-7f67fd7ddc-fvj8p\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:13:04.274560 master-0 kubenswrapper[7599]: I0313 01:13:04.270368 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca\") pod \"route-controller-manager-7974bfc85-x789h\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:13:04.274560 master-0 kubenswrapper[7599]: E0313 01:13:04.270526 7599 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:13:04.274560 master-0 kubenswrapper[7599]: E0313 01:13:04.270602 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca podName:37138c19-447a-4476-b108-08998f3a0f54 nodeName:}" failed. 
No retries permitted until 2026-03-13 01:13:20.270578142 +0000 UTC m=+59.542257546 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca") pod "route-controller-manager-7974bfc85-x789h" (UID: "37138c19-447a-4476-b108-08998f3a0f54") : configmap "client-ca" not found Mar 13 01:13:04.274560 master-0 kubenswrapper[7599]: E0313 01:13:04.271474 7599 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: configmap "client-ca" not found Mar 13 01:13:04.274560 master-0 kubenswrapper[7599]: E0313 01:13:04.271531 7599 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca podName:8dc25b28-3de0-472d-afe3-198a83f112c1 nodeName:}" failed. No retries permitted until 2026-03-13 01:13:20.271499232 +0000 UTC m=+59.543178626 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca") pod "controller-manager-7f67fd7ddc-fvj8p" (UID: "8dc25b28-3de0-472d-afe3-198a83f112c1") : configmap "client-ca" not found Mar 13 01:13:04.830895 master-0 kubenswrapper[7599]: I0313 01:13:04.830699 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 01:13:04.846410 master-0 kubenswrapper[7599]: I0313 01:13:04.846340 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p"] Mar 13 01:13:04.846886 master-0 kubenswrapper[7599]: E0313 01:13:04.846853 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" podUID="8dc25b28-3de0-472d-afe3-198a83f112c1" Mar 13 01:13:04.893197 
master-0 kubenswrapper[7599]: I0313 01:13:04.892316 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h"] Mar 13 01:13:04.893197 master-0 kubenswrapper[7599]: E0313 01:13:04.893089 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[client-ca], unattached volumes=[], failed to process volumes=[]: context canceled" pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" podUID="37138c19-447a-4476-b108-08998f3a0f54" Mar 13 01:13:05.053739 master-0 kubenswrapper[7599]: I0313 01:13:05.042178 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="77fb9b0a-8127-4594-99ae-98f9000d5cc4" path="/var/lib/kubelet/pods/77fb9b0a-8127-4594-99ae-98f9000d5cc4/volumes" Mar 13 01:13:05.223605 master-0 kubenswrapper[7599]: I0313 01:13:05.223255 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:13:05.224228 master-0 kubenswrapper[7599]: I0313 01:13:05.223754 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"94588bf1-f4cd-4446-999e-0039539e65a5","Type":"ContainerStarted","Data":"5244f7095c3f678f82891d0b5312367cb0c23c63204c4e8de4031d103c9168b7"} Mar 13 01:13:05.224228 master-0 kubenswrapper[7599]: I0313 01:13:05.223796 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:13:05.236071 master-0 kubenswrapper[7599]: I0313 01:13:05.236039 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:13:05.255472 master-0 kubenswrapper[7599]: I0313 01:13:05.255340 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:13:05.406097 master-0 kubenswrapper[7599]: I0313 01:13:05.405949 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-967kg\" (UniqueName: \"kubernetes.io/projected/8dc25b28-3de0-472d-afe3-198a83f112c1-kube-api-access-967kg\") pod \"8dc25b28-3de0-472d-afe3-198a83f112c1\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " Mar 13 01:13:05.406097 master-0 kubenswrapper[7599]: I0313 01:13:05.406042 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-config\") pod \"8dc25b28-3de0-472d-afe3-198a83f112c1\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " Mar 13 01:13:05.406097 master-0 kubenswrapper[7599]: I0313 01:13:05.406095 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-proxy-ca-bundles\") pod \"8dc25b28-3de0-472d-afe3-198a83f112c1\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " Mar 13 01:13:05.406365 master-0 kubenswrapper[7599]: I0313 01:13:05.406166 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-config\") pod \"37138c19-447a-4476-b108-08998f3a0f54\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " Mar 13 01:13:05.406365 master-0 kubenswrapper[7599]: I0313 01:13:05.406211 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v5szm\" (UniqueName: \"kubernetes.io/projected/37138c19-447a-4476-b108-08998f3a0f54-kube-api-access-v5szm\") pod \"37138c19-447a-4476-b108-08998f3a0f54\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " Mar 13 01:13:05.406365 master-0 kubenswrapper[7599]: I0313 
01:13:05.406236 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc25b28-3de0-472d-afe3-198a83f112c1-serving-cert\") pod \"8dc25b28-3de0-472d-afe3-198a83f112c1\" (UID: \"8dc25b28-3de0-472d-afe3-198a83f112c1\") " Mar 13 01:13:05.406365 master-0 kubenswrapper[7599]: I0313 01:13:05.406257 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37138c19-447a-4476-b108-08998f3a0f54-serving-cert\") pod \"37138c19-447a-4476-b108-08998f3a0f54\" (UID: \"37138c19-447a-4476-b108-08998f3a0f54\") " Mar 13 01:13:05.407406 master-0 kubenswrapper[7599]: I0313 01:13:05.407361 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-config" (OuterVolumeSpecName: "config") pod "37138c19-447a-4476-b108-08998f3a0f54" (UID: "37138c19-447a-4476-b108-08998f3a0f54"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:13:05.407619 master-0 kubenswrapper[7599]: I0313 01:13:05.407580 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-config" (OuterVolumeSpecName: "config") pod "8dc25b28-3de0-472d-afe3-198a83f112c1" (UID: "8dc25b28-3de0-472d-afe3-198a83f112c1"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:13:05.408623 master-0 kubenswrapper[7599]: I0313 01:13:05.408570 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8dc25b28-3de0-472d-afe3-198a83f112c1" (UID: "8dc25b28-3de0-472d-afe3-198a83f112c1"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:13:05.412287 master-0 kubenswrapper[7599]: I0313 01:13:05.412249 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37138c19-447a-4476-b108-08998f3a0f54-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "37138c19-447a-4476-b108-08998f3a0f54" (UID: "37138c19-447a-4476-b108-08998f3a0f54"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:13:05.413460 master-0 kubenswrapper[7599]: I0313 01:13:05.413248 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8dc25b28-3de0-472d-afe3-198a83f112c1-kube-api-access-967kg" (OuterVolumeSpecName: "kube-api-access-967kg") pod "8dc25b28-3de0-472d-afe3-198a83f112c1" (UID: "8dc25b28-3de0-472d-afe3-198a83f112c1"). InnerVolumeSpecName "kube-api-access-967kg". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:13:05.427012 master-0 kubenswrapper[7599]: I0313 01:13:05.426901 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8dc25b28-3de0-472d-afe3-198a83f112c1-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8dc25b28-3de0-472d-afe3-198a83f112c1" (UID: "8dc25b28-3de0-472d-afe3-198a83f112c1"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:13:05.430208 master-0 kubenswrapper[7599]: I0313 01:13:05.430153 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37138c19-447a-4476-b108-08998f3a0f54-kube-api-access-v5szm" (OuterVolumeSpecName: "kube-api-access-v5szm") pod "37138c19-447a-4476-b108-08998f3a0f54" (UID: "37138c19-447a-4476-b108-08998f3a0f54"). InnerVolumeSpecName "kube-api-access-v5szm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:13:05.507707 master-0 kubenswrapper[7599]: I0313 01:13:05.507497 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:05.507707 master-0 kubenswrapper[7599]: I0313 01:13:05.507701 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v5szm\" (UniqueName: \"kubernetes.io/projected/37138c19-447a-4476-b108-08998f3a0f54-kube-api-access-v5szm\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:05.507707 master-0 kubenswrapper[7599]: I0313 01:13:05.507713 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8dc25b28-3de0-472d-afe3-198a83f112c1-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:05.507707 master-0 kubenswrapper[7599]: I0313 01:13:05.507723 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37138c19-447a-4476-b108-08998f3a0f54-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:05.507707 master-0 kubenswrapper[7599]: I0313 01:13:05.507732 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-967kg\" (UniqueName: \"kubernetes.io/projected/8dc25b28-3de0-472d-afe3-198a83f112c1-kube-api-access-967kg\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:05.508023 master-0 kubenswrapper[7599]: I0313 01:13:05.507741 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:05.508023 master-0 kubenswrapper[7599]: I0313 01:13:05.507750 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-proxy-ca-bundles\") 
on node \"master-0\" DevicePath \"\"" Mar 13 01:13:05.612207 master-0 kubenswrapper[7599]: I0313 01:13:05.612158 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:13:05.612207 master-0 kubenswrapper[7599]: I0313 01:13:05.612220 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:13:05.620231 master-0 kubenswrapper[7599]: I0313 01:13:05.620173 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:13:06.150664 master-0 kubenswrapper[7599]: I0313 01:13:06.150377 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:13:06.157014 master-0 kubenswrapper[7599]: I0313 01:13:06.156813 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 01:13:06.157825 master-0 kubenswrapper[7599]: E0313 01:13:06.157068 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="77fb9b0a-8127-4594-99ae-98f9000d5cc4" containerName="installer" Mar 13 01:13:06.157825 master-0 kubenswrapper[7599]: I0313 01:13:06.157083 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="77fb9b0a-8127-4594-99ae-98f9000d5cc4" containerName="installer" Mar 13 01:13:06.157825 master-0 kubenswrapper[7599]: I0313 01:13:06.157198 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="77fb9b0a-8127-4594-99ae-98f9000d5cc4" containerName="installer" Mar 13 01:13:06.157825 master-0 kubenswrapper[7599]: I0313 01:13:06.157627 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.159801 master-0 kubenswrapper[7599]: I0313 01:13:06.157985 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:13:06.166774 master-0 kubenswrapper[7599]: I0313 01:13:06.166699 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 01:13:06.233804 master-0 kubenswrapper[7599]: I0313 01:13:06.230791 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"94588bf1-f4cd-4446-999e-0039539e65a5","Type":"ContainerStarted","Data":"a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046"} Mar 13 01:13:06.233804 master-0 kubenswrapper[7599]: I0313 01:13:06.233177 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p" Mar 13 01:13:06.234772 master-0 kubenswrapper[7599]: I0313 01:13:06.234431 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h" Mar 13 01:13:06.245425 master-0 kubenswrapper[7599]: I0313 01:13:06.245368 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a39cf00-835b-4dfc-9455-71aa8f509347-kube-api-access\") pod \"installer-3-master-0\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.245651 master-0 kubenswrapper[7599]: I0313 01:13:06.245534 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.245651 master-0 kubenswrapper[7599]: I0313 01:13:06.245589 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:13:06.246243 master-0 kubenswrapper[7599]: I0313 01:13:06.246190 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-var-lock\") pod \"installer-3-master-0\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.350106 master-0 kubenswrapper[7599]: I0313 01:13:06.350027 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-var-lock\") pod \"installer-3-master-0\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.350106 master-0 kubenswrapper[7599]: 
I0313 01:13:06.350108 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a39cf00-835b-4dfc-9455-71aa8f509347-kube-api-access\") pod \"installer-3-master-0\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.350454 master-0 kubenswrapper[7599]: I0313 01:13:06.350134 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.350454 master-0 kubenswrapper[7599]: I0313 01:13:06.350254 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.350454 master-0 kubenswrapper[7599]: I0313 01:13:06.350303 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-var-lock\") pod \"installer-3-master-0\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.371449 master-0 kubenswrapper[7599]: I0313 01:13:06.367953 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-1-master-0" podStartSLOduration=3.367926131 podStartE2EDuration="3.367926131s" podCreationTimestamp="2026-03-13 01:13:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:13:06.328704171 +0000 
UTC m=+45.600383585" watchObservedRunningTime="2026-03-13 01:13:06.367926131 +0000 UTC m=+45.639605535" Mar 13 01:13:06.376936 master-0 kubenswrapper[7599]: I0313 01:13:06.372922 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a39cf00-835b-4dfc-9455-71aa8f509347-kube-api-access\") pod \"installer-3-master-0\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.385380 master-0 kubenswrapper[7599]: I0313 01:13:06.382253 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7"] Mar 13 01:13:06.385380 master-0 kubenswrapper[7599]: I0313 01:13:06.383008 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.386320 master-0 kubenswrapper[7599]: I0313 01:13:06.385985 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h"] Mar 13 01:13:06.395188 master-0 kubenswrapper[7599]: I0313 01:13:06.391159 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 01:13:06.395188 master-0 kubenswrapper[7599]: I0313 01:13:06.392395 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 01:13:06.395188 master-0 kubenswrapper[7599]: I0313 01:13:06.393491 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 01:13:06.399765 master-0 kubenswrapper[7599]: I0313 01:13:06.399476 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 01:13:06.399765 master-0 
kubenswrapper[7599]: I0313 01:13:06.399609 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 01:13:06.400869 master-0 kubenswrapper[7599]: I0313 01:13:06.400779 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7974bfc85-x789h"] Mar 13 01:13:06.444353 master-0 kubenswrapper[7599]: I0313 01:13:06.442124 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7"] Mar 13 01:13:06.456326 master-0 kubenswrapper[7599]: I0313 01:13:06.456190 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-config\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.456326 master-0 kubenswrapper[7599]: I0313 01:13:06.456260 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-client-ca\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.456326 master-0 kubenswrapper[7599]: I0313 01:13:06.456320 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxn8\" (UniqueName: \"kubernetes.io/projected/d3a666ab-7b35-463e-b5fa-ecaa147296e8-kube-api-access-nlxn8\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 
01:13:06.456724 master-0 kubenswrapper[7599]: I0313 01:13:06.456461 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a666ab-7b35-463e-b5fa-ecaa147296e8-serving-cert\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.456724 master-0 kubenswrapper[7599]: I0313 01:13:06.456532 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37138c19-447a-4476-b108-08998f3a0f54-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:06.480738 master-0 kubenswrapper[7599]: I0313 01:13:06.478682 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:06.506069 master-0 kubenswrapper[7599]: I0313 01:13:06.505976 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p"] Mar 13 01:13:06.510688 master-0 kubenswrapper[7599]: I0313 01:13:06.510655 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f67fd7ddc-fvj8p"] Mar 13 01:13:06.559664 master-0 kubenswrapper[7599]: I0313 01:13:06.559617 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nlxn8\" (UniqueName: \"kubernetes.io/projected/d3a666ab-7b35-463e-b5fa-ecaa147296e8-kube-api-access-nlxn8\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.559850 master-0 kubenswrapper[7599]: I0313 01:13:06.559702 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/d3a666ab-7b35-463e-b5fa-ecaa147296e8-serving-cert\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.559914 master-0 kubenswrapper[7599]: I0313 01:13:06.559820 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-config\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.560183 master-0 kubenswrapper[7599]: I0313 01:13:06.559957 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-client-ca\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.560183 master-0 kubenswrapper[7599]: I0313 01:13:06.560135 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8dc25b28-3de0-472d-afe3-198a83f112c1-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:06.561925 master-0 kubenswrapper[7599]: I0313 01:13:06.561887 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-client-ca\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.562456 master-0 kubenswrapper[7599]: I0313 01:13:06.562418 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-config\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.577589 master-0 kubenswrapper[7599]: I0313 01:13:06.572495 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a666ab-7b35-463e-b5fa-ecaa147296e8-serving-cert\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.609064 master-0 kubenswrapper[7599]: I0313 01:13:06.604888 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nlxn8\" (UniqueName: \"kubernetes.io/projected/d3a666ab-7b35-463e-b5fa-ecaa147296e8-kube-api-access-nlxn8\") pod \"route-controller-manager-748966cb9f-wnsx7\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.747623 master-0 kubenswrapper[7599]: I0313 01:13:06.745880 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:06.775211 master-0 kubenswrapper[7599]: I0313 01:13:06.775159 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:13:06.945951 master-0 kubenswrapper[7599]: I0313 01:13:06.944786 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:13:06.998912 master-0 kubenswrapper[7599]: I0313 01:13:06.998861 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37138c19-447a-4476-b108-08998f3a0f54" path="/var/lib/kubelet/pods/37138c19-447a-4476-b108-08998f3a0f54/volumes" Mar 13 01:13:06.999311 master-0 kubenswrapper[7599]: I0313 01:13:06.999283 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8dc25b28-3de0-472d-afe3-198a83f112c1" path="/var/lib/kubelet/pods/8dc25b28-3de0-472d-afe3-198a83f112c1/volumes" Mar 13 01:13:07.030788 master-0 kubenswrapper[7599]: I0313 01:13:07.030735 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 01:13:07.177810 master-0 kubenswrapper[7599]: I0313 01:13:07.176561 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7"] Mar 13 01:13:07.185376 master-0 kubenswrapper[7599]: W0313 01:13:07.185315 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3a666ab_7b35_463e_b5fa_ecaa147296e8.slice/crio-d926842e3adb53b4cd63fe95b774afe59513b6565439305b9dd8b6b4b8718e8b WatchSource:0}: Error finding container d926842e3adb53b4cd63fe95b774afe59513b6565439305b9dd8b6b4b8718e8b: Status 404 returned error can't find the container with id 
d926842e3adb53b4cd63fe95b774afe59513b6565439305b9dd8b6b4b8718e8b Mar 13 01:13:07.247494 master-0 kubenswrapper[7599]: I0313 01:13:07.247440 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"2a39cf00-835b-4dfc-9455-71aa8f509347","Type":"ContainerStarted","Data":"ad6c20f954ef6f52eaa154679c9ef06260294d3a3abe7a17a117f355c17b2bb2"} Mar 13 01:13:07.249412 master-0 kubenswrapper[7599]: I0313 01:13:07.249325 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" event={"ID":"d3a666ab-7b35-463e-b5fa-ecaa147296e8","Type":"ContainerStarted","Data":"d926842e3adb53b4cd63fe95b774afe59513b6565439305b9dd8b6b4b8718e8b"} Mar 13 01:13:07.562890 master-0 kubenswrapper[7599]: I0313 01:13:07.562821 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pfsjd" Mar 13 01:13:08.258652 master-0 kubenswrapper[7599]: I0313 01:13:08.258584 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"2a39cf00-835b-4dfc-9455-71aa8f509347","Type":"ContainerStarted","Data":"80dda219c7bd72a8778fcc074747b2fcb68aa7675a6676f60bec319397926445"} Mar 13 01:13:09.464934 master-0 kubenswrapper[7599]: I0313 01:13:09.464184 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-3-master-0" podStartSLOduration=3.464159396 podStartE2EDuration="3.464159396s" podCreationTimestamp="2026-03-13 01:13:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:13:08.279139822 +0000 UTC m=+47.550819216" watchObservedRunningTime="2026-03-13 01:13:09.464159396 +0000 UTC m=+48.735838790" Mar 13 01:13:09.464934 master-0 kubenswrapper[7599]: I0313 01:13:09.464650 7599 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8"] Mar 13 01:13:09.465785 master-0 kubenswrapper[7599]: I0313 01:13:09.465614 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.472600 master-0 kubenswrapper[7599]: I0313 01:13:09.469024 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 01:13:09.472600 master-0 kubenswrapper[7599]: I0313 01:13:09.469426 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 01:13:09.472600 master-0 kubenswrapper[7599]: I0313 01:13:09.469640 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 01:13:09.472600 master-0 kubenswrapper[7599]: I0313 01:13:09.471191 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 01:13:09.472600 master-0 kubenswrapper[7599]: I0313 01:13:09.471491 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 01:13:09.480646 master-0 kubenswrapper[7599]: I0313 01:13:09.480590 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-proxy-ca-bundles\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.480886 master-0 kubenswrapper[7599]: I0313 01:13:09.480677 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-config\") pod 
\"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.480886 master-0 kubenswrapper[7599]: I0313 01:13:09.480732 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95849efd-fabc-4e21-82e1-a15bc6eee2ba-serving-cert\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.480886 master-0 kubenswrapper[7599]: I0313 01:13:09.480781 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-client-ca\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.480886 master-0 kubenswrapper[7599]: I0313 01:13:09.480838 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5bj2\" (UniqueName: \"kubernetes.io/projected/95849efd-fabc-4e21-82e1-a15bc6eee2ba-kube-api-access-t5bj2\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.490640 master-0 kubenswrapper[7599]: I0313 01:13:09.487954 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8"] Mar 13 01:13:09.506731 master-0 kubenswrapper[7599]: I0313 01:13:09.503904 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 01:13:09.582538 master-0 kubenswrapper[7599]: I0313 01:13:09.581823 7599 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5bj2\" (UniqueName: \"kubernetes.io/projected/95849efd-fabc-4e21-82e1-a15bc6eee2ba-kube-api-access-t5bj2\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.582538 master-0 kubenswrapper[7599]: I0313 01:13:09.581943 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-proxy-ca-bundles\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.582538 master-0 kubenswrapper[7599]: I0313 01:13:09.581986 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-config\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.582538 master-0 kubenswrapper[7599]: I0313 01:13:09.582036 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95849efd-fabc-4e21-82e1-a15bc6eee2ba-serving-cert\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.582538 master-0 kubenswrapper[7599]: I0313 01:13:09.582066 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-client-ca\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " 
pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.587588 master-0 kubenswrapper[7599]: I0313 01:13:09.583267 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-client-ca\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.587588 master-0 kubenswrapper[7599]: I0313 01:13:09.584449 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-config\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.587588 master-0 kubenswrapper[7599]: I0313 01:13:09.585453 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-proxy-ca-bundles\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.588206 master-0 kubenswrapper[7599]: I0313 01:13:09.588162 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95849efd-fabc-4e21-82e1-a15bc6eee2ba-serving-cert\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.644653 master-0 kubenswrapper[7599]: I0313 01:13:09.640789 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5bj2\" (UniqueName: 
\"kubernetes.io/projected/95849efd-fabc-4e21-82e1-a15bc6eee2ba-kube-api-access-t5bj2\") pod \"controller-manager-6d46b9fb7-t9sp8\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:09.811273 master-0 kubenswrapper[7599]: I0313 01:13:09.811208 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:12.511588 master-0 kubenswrapper[7599]: I0313 01:13:12.509486 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8"] Mar 13 01:13:13.018649 master-0 kubenswrapper[7599]: I0313 01:13:13.016948 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 13 01:13:13.018649 master-0 kubenswrapper[7599]: I0313 01:13:13.017624 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.027240 master-0 kubenswrapper[7599]: I0313 01:13:13.024592 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 13 01:13:13.027240 master-0 kubenswrapper[7599]: I0313 01:13:13.024795 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 01:13:13.035014 master-0 kubenswrapper[7599]: I0313 01:13:13.034987 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.035150 master-0 kubenswrapper[7599]: I0313 01:13:13.035136 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.035226 master-0 kubenswrapper[7599]: I0313 01:13:13.035214 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.040148 master-0 kubenswrapper[7599]: I0313 01:13:13.040107 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"] Mar 13 01:13:13.040466 master-0 kubenswrapper[7599]: I0313 01:13:13.040403 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/installer-3-master-0" podUID="2a39cf00-835b-4dfc-9455-71aa8f509347" containerName="installer" containerID="cri-o://80dda219c7bd72a8778fcc074747b2fcb68aa7675a6676f60bec319397926445" gracePeriod=30 Mar 13 01:13:13.059545 master-0 kubenswrapper[7599]: I0313 01:13:13.057628 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6"] Mar 13 01:13:13.059545 master-0 kubenswrapper[7599]: I0313 01:13:13.058546 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:13:13.063069 master-0 kubenswrapper[7599]: I0313 01:13:13.062869 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Mar 13 01:13:13.063970 master-0 kubenswrapper[7599]: I0313 01:13:13.063744 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Mar 13 01:13:13.064784 master-0 kubenswrapper[7599]: I0313 01:13:13.064663 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Mar 13 01:13:13.087804 master-0 kubenswrapper[7599]: I0313 01:13:13.087753 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6"] Mar 13 01:13:13.136046 master-0 kubenswrapper[7599]: I0313 01:13:13.135997 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.136046 master-0 kubenswrapper[7599]: I0313 01:13:13.136042 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.136214 master-0 kubenswrapper[7599]: I0313 01:13:13.136074 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g89p7\" (UniqueName: 
\"kubernetes.io/projected/56e20b21-ba17-46ae-a740-0e7bd45eae5f-kube-api-access-g89p7\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: \"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:13:13.136214 master-0 kubenswrapper[7599]: I0313 01:13:13.136108 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/56e20b21-ba17-46ae-a740-0e7bd45eae5f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: \"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:13:13.136214 master-0 kubenswrapper[7599]: I0313 01:13:13.136147 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.136313 master-0 kubenswrapper[7599]: I0313 01:13:13.136237 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.136313 master-0 kubenswrapper[7599]: I0313 01:13:13.136281 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.224441 master-0 
kubenswrapper[7599]: I0313 01:13:13.224250 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.237491 master-0 kubenswrapper[7599]: I0313 01:13:13.237439 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g89p7\" (UniqueName: \"kubernetes.io/projected/56e20b21-ba17-46ae-a740-0e7bd45eae5f-kube-api-access-g89p7\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: \"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:13:13.237731 master-0 kubenswrapper[7599]: I0313 01:13:13.237520 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/56e20b21-ba17-46ae-a740-0e7bd45eae5f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: \"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:13:13.241837 master-0 kubenswrapper[7599]: I0313 01:13:13.241803 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/56e20b21-ba17-46ae-a740-0e7bd45eae5f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: \"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:13:13.281181 master-0 kubenswrapper[7599]: I0313 01:13:13.281019 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g89p7\" (UniqueName: \"kubernetes.io/projected/56e20b21-ba17-46ae-a740-0e7bd45eae5f-kube-api-access-g89p7\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: \"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:13:13.284731 master-0 kubenswrapper[7599]: W0313 01:13:13.284667 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95849efd_fabc_4e21_82e1_a15bc6eee2ba.slice/crio-807c4facb58060fdd54ffa474fd915201f6855e041826ad6bd8e340dbc080dd4 WatchSource:0}: Error finding container 807c4facb58060fdd54ffa474fd915201f6855e041826ad6bd8e340dbc080dd4: Status 404 returned error can't find the container with id 807c4facb58060fdd54ffa474fd915201f6855e041826ad6bd8e340dbc080dd4 Mar 13 01:13:13.362704 master-0 kubenswrapper[7599]: I0313 01:13:13.362380 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:13:13.447640 master-0 kubenswrapper[7599]: I0313 01:13:13.447132 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:13:13.701013 master-0 kubenswrapper[7599]: I0313 01:13:13.700844 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" event={"ID":"95849efd-fabc-4e21-82e1-a15bc6eee2ba","Type":"ContainerStarted","Data":"807c4facb58060fdd54ffa474fd915201f6855e041826ad6bd8e340dbc080dd4"} Mar 13 01:13:13.714696 master-0 kubenswrapper[7599]: I0313 01:13:13.713374 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_2a39cf00-835b-4dfc-9455-71aa8f509347/installer/0.log" Mar 13 01:13:13.714696 master-0 kubenswrapper[7599]: I0313 01:13:13.713430 7599 generic.go:334] "Generic (PLEG): container finished" podID="2a39cf00-835b-4dfc-9455-71aa8f509347" containerID="80dda219c7bd72a8778fcc074747b2fcb68aa7675a6676f60bec319397926445" exitCode=1 Mar 13 01:13:13.714696 master-0 kubenswrapper[7599]: I0313 01:13:13.713466 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"2a39cf00-835b-4dfc-9455-71aa8f509347","Type":"ContainerDied","Data":"80dda219c7bd72a8778fcc074747b2fcb68aa7675a6676f60bec319397926445"} Mar 13 01:13:15.069428 master-0 kubenswrapper[7599]: I0313 01:13:15.068891 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 01:13:15.071725 master-0 kubenswrapper[7599]: I0313 01:13:15.071678 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:15.075558 master-0 kubenswrapper[7599]: I0313 01:13:15.075193 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 01:13:15.178177 master-0 kubenswrapper[7599]: I0313 01:13:15.178081 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:15.178177 master-0 kubenswrapper[7599]: I0313 01:13:15.178168 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:15.178493 master-0 kubenswrapper[7599]: I0313 01:13:15.178223 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:15.288689 master-0 kubenswrapper[7599]: I0313 01:13:15.287504 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:15.288689 master-0 kubenswrapper[7599]: I0313 01:13:15.287598 7599 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:15.288689 master-0 kubenswrapper[7599]: I0313 01:13:15.287635 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:15.288689 master-0 kubenswrapper[7599]: I0313 01:13:15.287736 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:15.288689 master-0 kubenswrapper[7599]: I0313 01:13:15.287793 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:15.319538 master-0 kubenswrapper[7599]: I0313 01:13:15.319373 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:15.433440 master-0 kubenswrapper[7599]: I0313 01:13:15.433372 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:13:16.934103 master-0 kubenswrapper[7599]: I0313 01:13:16.934008 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 01:13:16.935019 master-0 kubenswrapper[7599]: I0313 01:13:16.934247 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-1-master-0" podUID="94588bf1-f4cd-4446-999e-0039539e65a5" containerName="installer" containerID="cri-o://a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046" gracePeriod=30 Mar 13 01:13:18.750534 master-0 kubenswrapper[7599]: I0313 01:13:18.750451 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_6c32e816-aa69-4e9c-9fbf-56595c764f3b/installer/0.log" Mar 13 01:13:18.750534 master-0 kubenswrapper[7599]: I0313 01:13:18.750541 7599 generic.go:334] "Generic (PLEG): container finished" podID="6c32e816-aa69-4e9c-9fbf-56595c764f3b" containerID="17c0598fb82fc85207d161703480300077fafb1372eee649f6385e8290aca19a" exitCode=1 Mar 13 01:13:18.751316 master-0 kubenswrapper[7599]: I0313 01:13:18.750589 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"6c32e816-aa69-4e9c-9fbf-56595c764f3b","Type":"ContainerDied","Data":"17c0598fb82fc85207d161703480300077fafb1372eee649f6385e8290aca19a"} Mar 13 01:13:20.295945 master-0 kubenswrapper[7599]: I0313 01:13:20.295825 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn"] Mar 13 01:13:20.296940 master-0 kubenswrapper[7599]: I0313 01:13:20.296726 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.302744 master-0 kubenswrapper[7599]: I0313 01:13:20.302585 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 01:13:20.302744 master-0 kubenswrapper[7599]: I0313 01:13:20.302614 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 01:13:20.302744 master-0 kubenswrapper[7599]: I0313 01:13:20.302740 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 01:13:20.303023 master-0 kubenswrapper[7599]: I0313 01:13:20.302845 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 01:13:20.303023 master-0 kubenswrapper[7599]: I0313 01:13:20.302944 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 01:13:20.471075 master-0 kubenswrapper[7599]: I0313 01:13:20.471020 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eec92350-c2e5-4223-82fe-2c3f78c7945f-machine-approver-tls\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.471358 master-0 kubenswrapper[7599]: I0313 01:13:20.471114 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-auth-proxy-config\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " 
pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.471358 master-0 kubenswrapper[7599]: I0313 01:13:20.471148 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-config\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.471358 master-0 kubenswrapper[7599]: I0313 01:13:20.471188 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9jfk\" (UniqueName: \"kubernetes.io/projected/eec92350-c2e5-4223-82fe-2c3f78c7945f-kube-api-access-f9jfk\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.573246 master-0 kubenswrapper[7599]: I0313 01:13:20.573077 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-auth-proxy-config\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.573246 master-0 kubenswrapper[7599]: I0313 01:13:20.573158 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-config\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.574042 master-0 kubenswrapper[7599]: I0313 01:13:20.573683 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f9jfk\" (UniqueName: \"kubernetes.io/projected/eec92350-c2e5-4223-82fe-2c3f78c7945f-kube-api-access-f9jfk\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.574042 master-0 kubenswrapper[7599]: I0313 01:13:20.573891 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eec92350-c2e5-4223-82fe-2c3f78c7945f-machine-approver-tls\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.574335 master-0 kubenswrapper[7599]: I0313 01:13:20.574293 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-config\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.576435 master-0 kubenswrapper[7599]: I0313 01:13:20.576391 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-auth-proxy-config\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.591869 master-0 kubenswrapper[7599]: I0313 01:13:20.591822 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eec92350-c2e5-4223-82fe-2c3f78c7945f-machine-approver-tls\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " 
pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:20.912470 master-0 kubenswrapper[7599]: I0313 01:13:20.912295 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 01:13:20.913430 master-0 kubenswrapper[7599]: I0313 01:13:20.913395 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:21.081275 master-0 kubenswrapper[7599]: I0313 01:13:21.081211 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:21.081488 master-0 kubenswrapper[7599]: I0313 01:13:21.081375 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:21.081560 master-0 kubenswrapper[7599]: I0313 01:13:21.081487 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:21.183396 master-0 kubenswrapper[7599]: I0313 01:13:21.183254 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:21.183396 master-0 kubenswrapper[7599]: I0313 01:13:21.183387 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:21.183690 master-0 kubenswrapper[7599]: I0313 01:13:21.183438 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:21.183690 master-0 kubenswrapper[7599]: I0313 01:13:21.183546 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:21.183896 master-0 kubenswrapper[7599]: I0313 01:13:21.183818 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:21.603381 master-0 kubenswrapper[7599]: I0313 01:13:21.586529 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] 
Mar 13 01:13:23.183897 master-0 kubenswrapper[7599]: I0313 01:13:23.183821 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 01:13:23.192995 master-0 kubenswrapper[7599]: I0313 01:13:23.192940 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 01:13:23.204974 master-0 kubenswrapper[7599]: I0313 01:13:23.204899 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f9jfk\" (UniqueName: \"kubernetes.io/projected/eec92350-c2e5-4223-82fe-2c3f78c7945f-kube-api-access-f9jfk\") pod \"machine-approver-955fcfb87-56dsn\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:23.336068 master-0 kubenswrapper[7599]: I0313 01:13:23.335919 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:13:24.604049 master-0 kubenswrapper[7599]: I0313 01:13:24.604006 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_2a39cf00-835b-4dfc-9455-71aa8f509347/installer/0.log" Mar 13 01:13:24.604492 master-0 kubenswrapper[7599]: I0313 01:13:24.604081 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:24.739915 master-0 kubenswrapper[7599]: I0313 01:13:24.739830 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-kubelet-dir\") pod \"2a39cf00-835b-4dfc-9455-71aa8f509347\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " Mar 13 01:13:24.739915 master-0 kubenswrapper[7599]: I0313 01:13:24.739919 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-var-lock\") pod \"2a39cf00-835b-4dfc-9455-71aa8f509347\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " Mar 13 01:13:24.740646 master-0 kubenswrapper[7599]: I0313 01:13:24.740040 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a39cf00-835b-4dfc-9455-71aa8f509347-kube-api-access\") pod \"2a39cf00-835b-4dfc-9455-71aa8f509347\" (UID: \"2a39cf00-835b-4dfc-9455-71aa8f509347\") " Mar 13 01:13:24.740891 master-0 kubenswrapper[7599]: I0313 01:13:24.740835 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-var-lock" (OuterVolumeSpecName: "var-lock") pod "2a39cf00-835b-4dfc-9455-71aa8f509347" (UID: "2a39cf00-835b-4dfc-9455-71aa8f509347"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:24.742945 master-0 kubenswrapper[7599]: I0313 01:13:24.742866 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2a39cf00-835b-4dfc-9455-71aa8f509347" (UID: "2a39cf00-835b-4dfc-9455-71aa8f509347"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:24.743731 master-0 kubenswrapper[7599]: I0313 01:13:24.743626 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2a39cf00-835b-4dfc-9455-71aa8f509347-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2a39cf00-835b-4dfc-9455-71aa8f509347" (UID: "2a39cf00-835b-4dfc-9455-71aa8f509347"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:13:24.792929 master-0 kubenswrapper[7599]: I0313 01:13:24.792864 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-3-master-0_2a39cf00-835b-4dfc-9455-71aa8f509347/installer/0.log" Mar 13 01:13:24.793094 master-0 kubenswrapper[7599]: I0313 01:13:24.792959 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-3-master-0" event={"ID":"2a39cf00-835b-4dfc-9455-71aa8f509347","Type":"ContainerDied","Data":"ad6c20f954ef6f52eaa154679c9ef06260294d3a3abe7a17a117f355c17b2bb2"} Mar 13 01:13:24.793094 master-0 kubenswrapper[7599]: I0313 01:13:24.793026 7599 scope.go:117] "RemoveContainer" containerID="80dda219c7bd72a8778fcc074747b2fcb68aa7675a6676f60bec319397926445" Mar 13 01:13:24.793227 master-0 kubenswrapper[7599]: I0313 01:13:24.793028 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-3-master-0" Mar 13 01:13:24.844029 master-0 kubenswrapper[7599]: I0313 01:13:24.843927 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:24.844029 master-0 kubenswrapper[7599]: I0313 01:13:24.844009 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/2a39cf00-835b-4dfc-9455-71aa8f509347-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:24.844029 master-0 kubenswrapper[7599]: I0313 01:13:24.844032 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2a39cf00-835b-4dfc-9455-71aa8f509347-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:25.015294 master-0 kubenswrapper[7599]: I0313 01:13:25.015176 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:25.150892 master-0 kubenswrapper[7599]: I0313 01:13:25.150791 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:13:26.027773 master-0 kubenswrapper[7599]: I0313 01:13:26.027688 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"] Mar 13 01:13:26.028354 master-0 kubenswrapper[7599]: E0313 01:13:26.027927 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2a39cf00-835b-4dfc-9455-71aa8f509347" containerName="installer" Mar 13 01:13:26.028354 master-0 kubenswrapper[7599]: I0313 01:13:26.027940 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="2a39cf00-835b-4dfc-9455-71aa8f509347" containerName="installer" Mar 13 01:13:26.028354 master-0 kubenswrapper[7599]: I0313 01:13:26.028029 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="2a39cf00-835b-4dfc-9455-71aa8f509347" containerName="installer" Mar 13 01:13:26.030567 master-0 kubenswrapper[7599]: I0313 01:13:26.028564 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:13:26.034774 master-0 kubenswrapper[7599]: I0313 01:13:26.034699 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt" Mar 13 01:13:26.036958 master-0 kubenswrapper[7599]: I0313 01:13:26.035567 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 13 01:13:26.036958 master-0 kubenswrapper[7599]: I0313 01:13:26.035820 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 13 01:13:26.061538 master-0 kubenswrapper[7599]: I0313 01:13:26.057870 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca" Mar 13 01:13:26.064456 master-0 kubenswrapper[7599]: I0313 01:13:26.063444 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"] Mar 13 01:13:26.076326 master-0 kubenswrapper[7599]: I0313 01:13:26.076266 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:13:26.076545 master-0 kubenswrapper[7599]: I0313 01:13:26.076435 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cco-trusted-ca\") pod 
\"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:13:26.076545 master-0 kubenswrapper[7599]: I0313 01:13:26.076529 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt62j\" (UniqueName: \"kubernetes.io/projected/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-kube-api-access-vt62j\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:13:26.154544 master-0 kubenswrapper[7599]: I0313 01:13:26.149107 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"] Mar 13 01:13:26.154544 master-0 kubenswrapper[7599]: I0313 01:13:26.150008 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"
Mar 13 01:13:26.154544 master-0 kubenswrapper[7599]: I0313 01:13:26.152076 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 13 01:13:26.154544 master-0 kubenswrapper[7599]: I0313 01:13:26.152579 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 13 01:13:26.157713 master-0 kubenswrapper[7599]: I0313 01:13:26.157679 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 13 01:13:26.170685 master-0 kubenswrapper[7599]: I0313 01:13:26.170621 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 13 01:13:26.172625 master-0 kubenswrapper[7599]: I0313 01:13:26.171830 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-3-master-0"]
Mar 13 01:13:26.178036 master-0 kubenswrapper[7599]: I0313 01:13:26.177942 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:13:26.178036 master-0 kubenswrapper[7599]: I0313 01:13:26.178020 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt62j\" (UniqueName: \"kubernetes.io/projected/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-kube-api-access-vt62j\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:13:26.178275 master-0 kubenswrapper[7599]: I0313 01:13:26.178105 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:13:26.183626 master-0 kubenswrapper[7599]: I0313 01:13:26.181567 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:13:26.184001 master-0 kubenswrapper[7599]: I0313 01:13:26.183940 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"]
Mar 13 01:13:26.189173 master-0 kubenswrapper[7599]: I0313 01:13:26.188588 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:13:26.226841 master-0 kubenswrapper[7599]: I0313 01:13:26.226201 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt62j\" (UniqueName: \"kubernetes.io/projected/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-kube-api-access-vt62j\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:13:26.254270 master-0 kubenswrapper[7599]: I0313 01:13:26.251128 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"]
Mar 13 01:13:26.264315 master-0 kubenswrapper[7599]: I0313 01:13:26.264261 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.269815 master-0 kubenswrapper[7599]: I0313 01:13:26.267839 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 13 01:13:26.269815 master-0 kubenswrapper[7599]: I0313 01:13:26.268225 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 13 01:13:26.269815 master-0 kubenswrapper[7599]: I0313 01:13:26.268327 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 13 01:13:26.269815 master-0 kubenswrapper[7599]: I0313 01:13:26.268251 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 13 01:13:26.295794 master-0 kubenswrapper[7599]: I0313 01:13:26.286084 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98t7n\" (UniqueName: \"kubernetes.io/projected/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-kube-api-access-98t7n\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"
Mar 13 01:13:26.295794 master-0 kubenswrapper[7599]: I0313 01:13:26.286175 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"
Mar 13 01:13:26.310534 master-0 kubenswrapper[7599]: I0313 01:13:26.309586 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"]
Mar 13 01:13:26.335537 master-0 kubenswrapper[7599]: I0313 01:13:26.334689 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8"]
Mar 13 01:13:26.354920 master-0 kubenswrapper[7599]: I0313 01:13:26.354261 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"]
Mar 13 01:13:26.355361 master-0 kubenswrapper[7599]: I0313 01:13:26.355309 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.358535 master-0 kubenswrapper[7599]: I0313 01:13:26.358462 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 13 01:13:26.363641 master-0 kubenswrapper[7599]: I0313 01:13:26.358862 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 13 01:13:26.363641 master-0 kubenswrapper[7599]: I0313 01:13:26.360629 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7"]
Mar 13 01:13:26.367736 master-0 kubenswrapper[7599]: I0313 01:13:26.367116 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"]
Mar 13 01:13:26.387545 master-0 kubenswrapper[7599]: I0313 01:13:26.387467 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.387753 master-0 kubenswrapper[7599]: I0313 01:13:26.387553 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98t7n\" (UniqueName: \"kubernetes.io/projected/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-kube-api-access-98t7n\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"
Mar 13 01:13:26.387753 master-0 kubenswrapper[7599]: I0313 01:13:26.387632 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-images\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.387753 master-0 kubenswrapper[7599]: I0313 01:13:26.387659 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"
Mar 13 01:13:26.387753 master-0 kubenswrapper[7599]: I0313 01:13:26.387683 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.387753 master-0 kubenswrapper[7599]: I0313 01:13:26.387709 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9npsh\" (UniqueName: \"kubernetes.io/projected/21110b48-25fc-434a-b156-7f6bd6064bed-kube-api-access-9npsh\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.387753 master-0 kubenswrapper[7599]: I0313 01:13:26.387735 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-config\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.405115 master-0 kubenswrapper[7599]: I0313 01:13:26.403909 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"
Mar 13 01:13:26.405115 master-0 kubenswrapper[7599]: I0313 01:13:26.404019 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:13:26.428620 master-0 kubenswrapper[7599]: I0313 01:13:26.428392 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98t7n\" (UniqueName: \"kubernetes.io/projected/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-kube-api-access-98t7n\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"
Mar 13 01:13:26.492110 master-0 kubenswrapper[7599]: I0313 01:13:26.492065 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.492569 master-0 kubenswrapper[7599]: I0313 01:13:26.492551 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gh7ks\" (UniqueName: \"kubernetes.io/projected/2581e5b5-8cbb-4fa5-9888-98fb572a6232-kube-api-access-gh7ks\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.492714 master-0 kubenswrapper[7599]: I0313 01:13:26.492700 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-images\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.492893 master-0 kubenswrapper[7599]: I0313 01:13:26.492878 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.493096 master-0 kubenswrapper[7599]: I0313 01:13:26.493081 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2581e5b5-8cbb-4fa5-9888-98fb572a6232-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.493235 master-0 kubenswrapper[7599]: I0313 01:13:26.493220 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9npsh\" (UniqueName: \"kubernetes.io/projected/21110b48-25fc-434a-b156-7f6bd6064bed-kube-api-access-9npsh\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.493522 master-0 kubenswrapper[7599]: I0313 01:13:26.493462 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-config\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.493578 master-0 kubenswrapper[7599]: I0313 01:13:26.493555 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2581e5b5-8cbb-4fa5-9888-98fb572a6232-cert\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.493785 master-0 kubenswrapper[7599]: I0313 01:13:26.493617 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-images\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.494298 master-0 kubenswrapper[7599]: I0313 01:13:26.494221 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-config\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.496453 master-0 kubenswrapper[7599]: I0313 01:13:26.496411 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.497495 master-0 kubenswrapper[7599]: I0313 01:13:26.497413 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.514620 master-0 kubenswrapper[7599]: I0313 01:13:26.514573 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9npsh\" (UniqueName: \"kubernetes.io/projected/21110b48-25fc-434a-b156-7f6bd6064bed-kube-api-access-9npsh\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.535303 master-0 kubenswrapper[7599]: I0313 01:13:26.535137 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"
Mar 13 01:13:26.594725 master-0 kubenswrapper[7599]: I0313 01:13:26.594487 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2581e5b5-8cbb-4fa5-9888-98fb572a6232-cert\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.594725 master-0 kubenswrapper[7599]: I0313 01:13:26.594683 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh7ks\" (UniqueName: \"kubernetes.io/projected/2581e5b5-8cbb-4fa5-9888-98fb572a6232-kube-api-access-gh7ks\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.595836 master-0 kubenswrapper[7599]: I0313 01:13:26.594736 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2581e5b5-8cbb-4fa5-9888-98fb572a6232-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.596168 master-0 kubenswrapper[7599]: I0313 01:13:26.596032 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2581e5b5-8cbb-4fa5-9888-98fb572a6232-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.598522 master-0 kubenswrapper[7599]: I0313 01:13:26.598422 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2581e5b5-8cbb-4fa5-9888-98fb572a6232-cert\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.612762 master-0 kubenswrapper[7599]: I0313 01:13:26.612059 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:13:26.622817 master-0 kubenswrapper[7599]: I0313 01:13:26.622704 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh7ks\" (UniqueName: \"kubernetes.io/projected/2581e5b5-8cbb-4fa5-9888-98fb572a6232-kube-api-access-gh7ks\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.690332 master-0 kubenswrapper[7599]: I0313 01:13:26.690245 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:13:26.924646 master-0 kubenswrapper[7599]: I0313 01:13:26.924010 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"]
Mar 13 01:13:26.924816 master-0 kubenswrapper[7599]: I0313 01:13:26.924749 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:13:26.929204 master-0 kubenswrapper[7599]: I0313 01:13:26.929151 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 13 01:13:26.939586 master-0 kubenswrapper[7599]: I0313 01:13:26.939352 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"]
Mar 13 01:13:26.996916 master-0 kubenswrapper[7599]: I0313 01:13:26.996841 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a39cf00-835b-4dfc-9455-71aa8f509347" path="/var/lib/kubelet/pods/2a39cf00-835b-4dfc-9455-71aa8f509347/volumes"
Mar 13 01:13:27.002489 master-0 kubenswrapper[7599]: I0313 01:13:27.002453 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:13:27.002610 master-0 kubenswrapper[7599]: I0313 01:13:27.002546 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psvcz\" (UniqueName: \"kubernetes.io/projected/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-kube-api-access-psvcz\") pod \"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:13:27.104263 master-0 kubenswrapper[7599]: I0313 01:13:27.104200 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psvcz\" (UniqueName: \"kubernetes.io/projected/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-kube-api-access-psvcz\") pod \"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:13:27.104263 master-0 kubenswrapper[7599]: I0313 01:13:27.104293 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:13:27.136551 master-0 kubenswrapper[7599]: I0313 01:13:27.123099 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:13:27.155709 master-0 kubenswrapper[7599]: I0313 01:13:27.154980 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psvcz\" (UniqueName: \"kubernetes.io/projected/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-kube-api-access-psvcz\") pod \"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:13:27.252620 master-0 kubenswrapper[7599]: I0313 01:13:27.252541 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:13:27.382069 master-0 kubenswrapper[7599]: I0313 01:13:27.382007 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-hn4jh"]
Mar 13 01:13:27.384947 master-0 kubenswrapper[7599]: I0313 01:13:27.382756 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.388465 master-0 kubenswrapper[7599]: I0313 01:13:27.388261 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 13 01:13:27.388658 master-0 kubenswrapper[7599]: I0313 01:13:27.388585 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 13 01:13:27.388861 master-0 kubenswrapper[7599]: I0313 01:13:27.388827 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 13 01:13:27.388982 master-0 kubenswrapper[7599]: I0313 01:13:27.388955 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 13 01:13:27.397659 master-0 kubenswrapper[7599]: I0313 01:13:27.397578 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 13 01:13:27.398782 master-0 kubenswrapper[7599]: I0313 01:13:27.398736 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-hn4jh"]
Mar 13 01:13:27.408465 master-0 kubenswrapper[7599]: I0313 01:13:27.408422 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-service-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.408601 master-0 kubenswrapper[7599]: I0313 01:13:27.408470 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e799871-735a-44e8-8193-24c5bb388928-serving-cert\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.408601 master-0 kubenswrapper[7599]: I0313 01:13:27.408579 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.408675 master-0 kubenswrapper[7599]: I0313 01:13:27.408608 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/6e799871-735a-44e8-8193-24c5bb388928-snapshots\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.408675 master-0 kubenswrapper[7599]: I0313 01:13:27.408628 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jthxn\" (UniqueName: \"kubernetes.io/projected/6e799871-735a-44e8-8193-24c5bb388928-kube-api-access-jthxn\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.518721 master-0 kubenswrapper[7599]: I0313 01:13:27.518121 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-service-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.518721 master-0 kubenswrapper[7599]: I0313 01:13:27.518182 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e799871-735a-44e8-8193-24c5bb388928-serving-cert\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.518721 master-0 kubenswrapper[7599]: I0313 01:13:27.518249 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.518721 master-0 kubenswrapper[7599]: I0313 01:13:27.518289 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/6e799871-735a-44e8-8193-24c5bb388928-snapshots\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.518721 master-0 kubenswrapper[7599]: I0313 01:13:27.518313 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jthxn\" (UniqueName: \"kubernetes.io/projected/6e799871-735a-44e8-8193-24c5bb388928-kube-api-access-jthxn\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.519804 master-0 kubenswrapper[7599]: I0313 01:13:27.519778 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.521116 master-0 kubenswrapper[7599]: I0313 01:13:27.521079 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-service-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.526736 master-0 kubenswrapper[7599]: I0313 01:13:27.526392 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/6e799871-735a-44e8-8193-24c5bb388928-snapshots\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.536895 master-0 kubenswrapper[7599]: I0313 01:13:27.536654 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jthxn\" (UniqueName: \"kubernetes.io/projected/6e799871-735a-44e8-8193-24c5bb388928-kube-api-access-jthxn\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.539060 master-0 kubenswrapper[7599]: I0313 01:13:27.538806 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e799871-735a-44e8-8193-24c5bb388928-serving-cert\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.711911 master-0 kubenswrapper[7599]: I0313 01:13:27.711851 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:13:27.854163 master-0 kubenswrapper[7599]: I0313 01:13:27.854041 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:13:28.069642 master-0 kubenswrapper[7599]: I0313 01:13:28.068954 7599 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0-master-0"]
Mar 13 01:13:28.069642 master-0 kubenswrapper[7599]: I0313 01:13:28.069209 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl" containerID="cri-o://2a446f182b10829874f21b28a6050799a0e95cf3b7880d6db31740a7140ff67b" gracePeriod=30
Mar 13 01:13:28.069642 master-0 kubenswrapper[7599]: I0313 01:13:28.069377 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0-master-0" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd" containerID="cri-o://a665d6a554bcc038bf3cf3aa905f1884c4c54fb9c32ce798ba9ecbaf1bab11e0" gracePeriod=30
Mar 13 01:13:28.073214 master-0 kubenswrapper[7599]: I0313 01:13:28.072523 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 13 01:13:28.073214 master-0 kubenswrapper[7599]: E0313 01:13:28.072823 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 13 01:13:28.073214 master-0 kubenswrapper[7599]: I0313 01:13:28.072837 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 13 01:13:28.073214 master-0 kubenswrapper[7599]: E0313 01:13:28.072864 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 13 01:13:28.073214 master-0 kubenswrapper[7599]: I0313 01:13:28.072874 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 13 01:13:28.073214 master-0 kubenswrapper[7599]: I0313 01:13:28.072998 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcdctl"
Mar 13 01:13:28.073214 master-0 kubenswrapper[7599]: I0313 01:13:28.073013 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="354f29997baa583b6238f7de9108ee10" containerName="etcd"
Mar 13 01:13:28.075104 master-0 kubenswrapper[7599]: I0313 01:13:28.074585 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.129794 master-0 kubenswrapper[7599]: I0313 01:13:28.129581 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.130861 master-0 kubenswrapper[7599]: I0313 01:13:28.130843 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.130991 master-0 kubenswrapper[7599]: I0313 01:13:28.130978 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.131126 master-0 kubenswrapper[7599]: I0313 01:13:28.131092 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.131541 master-0 kubenswrapper[7599]: I0313 01:13:28.131498 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.131701 master-0 kubenswrapper[7599]: I0313 01:13:28.131677 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.233724 master-0 kubenswrapper[7599]: I0313 01:13:28.233651 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.233724 master-0 kubenswrapper[7599]: I0313 01:13:28.233731 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.234063 master-0 kubenswrapper[7599]: I0313 01:13:28.233778 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.234063 master-0 kubenswrapper[7599]: I0313 01:13:28.233802 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.234063 master-0 kubenswrapper[7599]: I0313 01:13:28.233818 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.234063 master-0 kubenswrapper[7599]: I0313 01:13:28.233858 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.234063 master-0 kubenswrapper[7599]: I0313 01:13:28.233959 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0"
Mar 13 01:13:28.234063 master-0 kubenswrapper[7599]: I0313 01:13:28.234016 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName:
\"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:13:28.234063 master-0 kubenswrapper[7599]: I0313 01:13:28.234039 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:13:28.234306 master-0 kubenswrapper[7599]: I0313 01:13:28.234075 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:13:28.234306 master-0 kubenswrapper[7599]: I0313 01:13:28.234098 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:13:28.234306 master-0 kubenswrapper[7599]: I0313 01:13:28.234121 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:13:31.840986 master-0 kubenswrapper[7599]: I0313 01:13:31.840905 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_6c32e816-aa69-4e9c-9fbf-56595c764f3b/installer/0.log" Mar 13 01:13:31.841987 master-0 kubenswrapper[7599]: I0313 01:13:31.841019 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:13:31.842161 master-0 kubenswrapper[7599]: I0313 01:13:31.842084 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-1-master-0_6c32e816-aa69-4e9c-9fbf-56595c764f3b/installer/0.log" Mar 13 01:13:31.842317 master-0 kubenswrapper[7599]: I0313 01:13:31.842229 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-1-master-0" event={"ID":"6c32e816-aa69-4e9c-9fbf-56595c764f3b","Type":"ContainerDied","Data":"063df2d43e5cbeb2c97fe2580ecd4460c3cfd1e7790de2a7bf5d6090738d8fb2"} Mar 13 01:13:31.842317 master-0 kubenswrapper[7599]: I0313 01:13:31.842288 7599 scope.go:117] "RemoveContainer" containerID="17c0598fb82fc85207d161703480300077fafb1372eee649f6385e8290aca19a" Mar 13 01:13:31.925489 master-0 kubenswrapper[7599]: I0313 01:13:31.925436 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kube-api-access\") pod \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " Mar 13 01:13:31.925763 master-0 kubenswrapper[7599]: I0313 01:13:31.925722 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kubelet-dir\") pod \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " Mar 13 01:13:31.925849 master-0 kubenswrapper[7599]: I0313 01:13:31.925790 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-var-lock\") pod \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\" (UID: \"6c32e816-aa69-4e9c-9fbf-56595c764f3b\") " Mar 13 01:13:31.926096 master-0 kubenswrapper[7599]: I0313 01:13:31.926036 7599 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6c32e816-aa69-4e9c-9fbf-56595c764f3b" (UID: "6c32e816-aa69-4e9c-9fbf-56595c764f3b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:31.926096 master-0 kubenswrapper[7599]: I0313 01:13:31.926076 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-var-lock" (OuterVolumeSpecName: "var-lock") pod "6c32e816-aa69-4e9c-9fbf-56595c764f3b" (UID: "6c32e816-aa69-4e9c-9fbf-56595c764f3b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:31.930502 master-0 kubenswrapper[7599]: I0313 01:13:31.930453 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6c32e816-aa69-4e9c-9fbf-56595c764f3b" (UID: "6c32e816-aa69-4e9c-9fbf-56595c764f3b"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:13:32.027609 master-0 kubenswrapper[7599]: I0313 01:13:32.027552 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:32.027609 master-0 kubenswrapper[7599]: I0313 01:13:32.027592 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6c32e816-aa69-4e9c-9fbf-56595c764f3b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:32.027609 master-0 kubenswrapper[7599]: I0313 01:13:32.027603 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6c32e816-aa69-4e9c-9fbf-56595c764f3b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:32.849593 master-0 kubenswrapper[7599]: I0313 01:13:32.848624 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-1-master-0" Mar 13 01:13:33.071608 master-0 kubenswrapper[7599]: W0313 01:13:33.071472 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeec92350_c2e5_4223_82fe_2c3f78c7945f.slice/crio-84da267170c9e91b410d7e9d9438b6c48844d88a0a7765f16ae9587a89797c0b WatchSource:0}: Error finding container 84da267170c9e91b410d7e9d9438b6c48844d88a0a7765f16ae9587a89797c0b: Status 404 returned error can't find the container with id 84da267170c9e91b410d7e9d9438b6c48844d88a0a7765f16ae9587a89797c0b Mar 13 01:13:33.862609 master-0 kubenswrapper[7599]: I0313 01:13:33.862524 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" event={"ID":"eec92350-c2e5-4223-82fe-2c3f78c7945f","Type":"ContainerStarted","Data":"43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9"} Mar 13 01:13:33.863156 master-0 kubenswrapper[7599]: I0313 01:13:33.862641 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" event={"ID":"eec92350-c2e5-4223-82fe-2c3f78c7945f","Type":"ContainerStarted","Data":"84da267170c9e91b410d7e9d9438b6c48844d88a0a7765f16ae9587a89797c0b"} Mar 13 01:13:33.864865 master-0 kubenswrapper[7599]: I0313 01:13:33.864801 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzlpt" event={"ID":"40c57f94-16b7-4011-bc29-386d52a06d2a","Type":"ContainerStarted","Data":"077aaebe5d05ea235d4155fe2579604bd5aaa26272fc52bf8e69c62760433c36"} Mar 13 01:13:33.867590 master-0 kubenswrapper[7599]: I0313 01:13:33.867551 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" 
event={"ID":"d3a666ab-7b35-463e-b5fa-ecaa147296e8","Type":"ContainerStarted","Data":"bd8f5db9024b69d4f03b0f10de7429d9d40f50963031947d43152cb3b07cc22c"} Mar 13 01:13:33.867724 master-0 kubenswrapper[7599]: I0313 01:13:33.867603 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" podUID="d3a666ab-7b35-463e-b5fa-ecaa147296e8" containerName="route-controller-manager" containerID="cri-o://bd8f5db9024b69d4f03b0f10de7429d9d40f50963031947d43152cb3b07cc22c" gracePeriod=30 Mar 13 01:13:33.867925 master-0 kubenswrapper[7599]: I0313 01:13:33.867889 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:33.871302 master-0 kubenswrapper[7599]: I0313 01:13:33.871247 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t88cc" event={"ID":"c6382e2a-ec14-4457-8f26-3087b19d1e1a","Type":"ContainerStarted","Data":"ab0441da017b242d280ba9219e193f7d2acb102387dc5709f3d4ed81eb17fad9"} Mar 13 01:13:33.872992 master-0 kubenswrapper[7599]: I0313 01:13:33.872937 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mqtr" event={"ID":"9992615a-c49b-4ef0-b02b-c6cd2e719fa3","Type":"ContainerStarted","Data":"d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f"} Mar 13 01:13:33.877045 master-0 kubenswrapper[7599]: I0313 01:13:33.875468 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" podUID="95849efd-fabc-4e21-82e1-a15bc6eee2ba" containerName="controller-manager" containerID="cri-o://6396054c67f8f967f93c8871ea043c327625275a7f1b4769a28ba814149a8b42" gracePeriod=30 Mar 13 01:13:33.877045 master-0 kubenswrapper[7599]: I0313 01:13:33.875529 7599 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" event={"ID":"95849efd-fabc-4e21-82e1-a15bc6eee2ba","Type":"ContainerStarted","Data":"6396054c67f8f967f93c8871ea043c327625275a7f1b4769a28ba814149a8b42"} Mar 13 01:13:33.877045 master-0 kubenswrapper[7599]: I0313 01:13:33.875583 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:33.880906 master-0 kubenswrapper[7599]: I0313 01:13:33.878137 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnmjr" event={"ID":"39bfb7e2-d1a8-4791-a52e-72f2b4790f96","Type":"ContainerStarted","Data":"bbd115c3920bc3d2b6483fd0c3c7e46a8152587c78c6bc52a5fe4a31a5ba7a98"} Mar 13 01:13:33.899149 master-0 kubenswrapper[7599]: I0313 01:13:33.898035 7599 patch_prober.go:28] interesting pod/route-controller-manager-748966cb9f-wnsx7 container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.52:8443/healthz\": read tcp 10.128.0.2:52898->10.128.0.52:8443: read: connection reset by peer" start-of-body= Mar 13 01:13:33.899149 master-0 kubenswrapper[7599]: I0313 01:13:33.898112 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" podUID="d3a666ab-7b35-463e-b5fa-ecaa147296e8" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.128.0.52:8443/healthz\": read tcp 10.128.0.2:52898->10.128.0.52:8443: read: connection reset by peer" Mar 13 01:13:33.899149 master-0 kubenswrapper[7599]: I0313 01:13:33.898291 7599 patch_prober.go:28] interesting pod/controller-manager-6d46b9fb7-t9sp8 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.53:8443/healthz\": read tcp 10.128.0.2:48548->10.128.0.53:8443: read: 
connection reset by peer" start-of-body= Mar 13 01:13:33.899149 master-0 kubenswrapper[7599]: I0313 01:13:33.898314 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" podUID="95849efd-fabc-4e21-82e1-a15bc6eee2ba" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.53:8443/healthz\": read tcp 10.128.0.2:48548->10.128.0.53:8443: read: connection reset by peer" Mar 13 01:13:34.938983 master-0 kubenswrapper[7599]: I0313 01:13:34.938755 7599 generic.go:334] "Generic (PLEG): container finished" podID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerID="ab0441da017b242d280ba9219e193f7d2acb102387dc5709f3d4ed81eb17fad9" exitCode=0 Mar 13 01:13:34.938983 master-0 kubenswrapper[7599]: I0313 01:13:34.938910 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t88cc" event={"ID":"c6382e2a-ec14-4457-8f26-3087b19d1e1a","Type":"ContainerDied","Data":"ab0441da017b242d280ba9219e193f7d2acb102387dc5709f3d4ed81eb17fad9"} Mar 13 01:13:34.942474 master-0 kubenswrapper[7599]: I0313 01:13:34.941499 7599 generic.go:334] "Generic (PLEG): container finished" podID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerID="077aaebe5d05ea235d4155fe2579604bd5aaa26272fc52bf8e69c62760433c36" exitCode=0 Mar 13 01:13:34.942474 master-0 kubenswrapper[7599]: I0313 01:13:34.941567 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzlpt" event={"ID":"40c57f94-16b7-4011-bc29-386d52a06d2a","Type":"ContainerDied","Data":"077aaebe5d05ea235d4155fe2579604bd5aaa26272fc52bf8e69c62760433c36"} Mar 13 01:13:34.944406 master-0 kubenswrapper[7599]: I0313 01:13:34.944347 7599 generic.go:334] "Generic (PLEG): container finished" podID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerID="d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f" exitCode=0 Mar 13 01:13:34.944504 master-0 kubenswrapper[7599]: I0313 
01:13:34.944467 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mqtr" event={"ID":"9992615a-c49b-4ef0-b02b-c6cd2e719fa3","Type":"ContainerDied","Data":"d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f"} Mar 13 01:13:34.947019 master-0 kubenswrapper[7599]: I0313 01:13:34.946871 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-748966cb9f-wnsx7_d3a666ab-7b35-463e-b5fa-ecaa147296e8/route-controller-manager/0.log" Mar 13 01:13:34.947019 master-0 kubenswrapper[7599]: I0313 01:13:34.946926 7599 generic.go:334] "Generic (PLEG): container finished" podID="d3a666ab-7b35-463e-b5fa-ecaa147296e8" containerID="bd8f5db9024b69d4f03b0f10de7429d9d40f50963031947d43152cb3b07cc22c" exitCode=255 Mar 13 01:13:34.947179 master-0 kubenswrapper[7599]: I0313 01:13:34.947035 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" event={"ID":"d3a666ab-7b35-463e-b5fa-ecaa147296e8","Type":"ContainerDied","Data":"bd8f5db9024b69d4f03b0f10de7429d9d40f50963031947d43152cb3b07cc22c"} Mar 13 01:13:34.949772 master-0 kubenswrapper[7599]: I0313 01:13:34.949621 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" event={"ID":"95849efd-fabc-4e21-82e1-a15bc6eee2ba","Type":"ContainerDied","Data":"6396054c67f8f967f93c8871ea043c327625275a7f1b4769a28ba814149a8b42"} Mar 13 01:13:34.949772 master-0 kubenswrapper[7599]: I0313 01:13:34.949637 7599 generic.go:334] "Generic (PLEG): container finished" podID="95849efd-fabc-4e21-82e1-a15bc6eee2ba" containerID="6396054c67f8f967f93c8871ea043c327625275a7f1b4769a28ba814149a8b42" exitCode=0 Mar 13 01:13:34.952480 master-0 kubenswrapper[7599]: I0313 01:13:34.951744 7599 generic.go:334] "Generic (PLEG): container finished" podID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" 
containerID="bbd115c3920bc3d2b6483fd0c3c7e46a8152587c78c6bc52a5fe4a31a5ba7a98" exitCode=0 Mar 13 01:13:34.952480 master-0 kubenswrapper[7599]: I0313 01:13:34.951781 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnmjr" event={"ID":"39bfb7e2-d1a8-4791-a52e-72f2b4790f96","Type":"ContainerDied","Data":"bbd115c3920bc3d2b6483fd0c3c7e46a8152587c78c6bc52a5fe4a31a5ba7a98"} Mar 13 01:13:35.399745 master-0 kubenswrapper[7599]: I0313 01:13:35.399697 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-748966cb9f-wnsx7_d3a666ab-7b35-463e-b5fa-ecaa147296e8/route-controller-manager/0.log" Mar 13 01:13:35.399936 master-0 kubenswrapper[7599]: I0313 01:13:35.399775 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:35.403172 master-0 kubenswrapper[7599]: I0313 01:13:35.402995 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:35.536953 master-0 kubenswrapper[7599]: I0313 01:13:35.535748 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-client-ca\") pod \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " Mar 13 01:13:35.536953 master-0 kubenswrapper[7599]: I0313 01:13:35.536237 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nlxn8\" (UniqueName: \"kubernetes.io/projected/d3a666ab-7b35-463e-b5fa-ecaa147296e8-kube-api-access-nlxn8\") pod \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " Mar 13 01:13:35.536953 master-0 kubenswrapper[7599]: I0313 01:13:35.536353 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-client-ca\") pod \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " Mar 13 01:13:35.536953 master-0 kubenswrapper[7599]: I0313 01:13:35.536400 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t5bj2\" (UniqueName: \"kubernetes.io/projected/95849efd-fabc-4e21-82e1-a15bc6eee2ba-kube-api-access-t5bj2\") pod \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " Mar 13 01:13:35.536953 master-0 kubenswrapper[7599]: I0313 01:13:35.536439 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-config\") pod \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " Mar 13 01:13:35.536953 master-0 kubenswrapper[7599]: I0313 01:13:35.536581 7599 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a666ab-7b35-463e-b5fa-ecaa147296e8-serving-cert\") pod \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\" (UID: \"d3a666ab-7b35-463e-b5fa-ecaa147296e8\") " Mar 13 01:13:35.536953 master-0 kubenswrapper[7599]: I0313 01:13:35.536604 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-proxy-ca-bundles\") pod \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " Mar 13 01:13:35.536953 master-0 kubenswrapper[7599]: I0313 01:13:35.536728 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-client-ca" (OuterVolumeSpecName: "client-ca") pod "95849efd-fabc-4e21-82e1-a15bc6eee2ba" (UID: "95849efd-fabc-4e21-82e1-a15bc6eee2ba"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:13:35.537583 master-0 kubenswrapper[7599]: I0313 01:13:35.537211 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95849efd-fabc-4e21-82e1-a15bc6eee2ba-serving-cert\") pod \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " Mar 13 01:13:35.537583 master-0 kubenswrapper[7599]: I0313 01:13:35.537333 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-config\") pod \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\" (UID: \"95849efd-fabc-4e21-82e1-a15bc6eee2ba\") " Mar 13 01:13:35.537583 master-0 kubenswrapper[7599]: I0313 01:13:35.537256 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "95849efd-fabc-4e21-82e1-a15bc6eee2ba" (UID: "95849efd-fabc-4e21-82e1-a15bc6eee2ba"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:13:35.537583 master-0 kubenswrapper[7599]: I0313 01:13:35.537389 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-config" (OuterVolumeSpecName: "config") pod "d3a666ab-7b35-463e-b5fa-ecaa147296e8" (UID: "d3a666ab-7b35-463e-b5fa-ecaa147296e8"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:13:35.538193 master-0 kubenswrapper[7599]: I0313 01:13:35.537901 7599 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:35.538193 master-0 kubenswrapper[7599]: I0313 01:13:35.537931 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:35.538193 master-0 kubenswrapper[7599]: I0313 01:13:35.537948 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:35.538193 master-0 kubenswrapper[7599]: I0313 01:13:35.538137 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-client-ca" (OuterVolumeSpecName: "client-ca") pod "d3a666ab-7b35-463e-b5fa-ecaa147296e8" (UID: "d3a666ab-7b35-463e-b5fa-ecaa147296e8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:13:35.538453 master-0 kubenswrapper[7599]: I0313 01:13:35.538399 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-config" (OuterVolumeSpecName: "config") pod "95849efd-fabc-4e21-82e1-a15bc6eee2ba" (UID: "95849efd-fabc-4e21-82e1-a15bc6eee2ba"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:13:35.540642 master-0 kubenswrapper[7599]: I0313 01:13:35.540598 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3a666ab-7b35-463e-b5fa-ecaa147296e8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d3a666ab-7b35-463e-b5fa-ecaa147296e8" (UID: "d3a666ab-7b35-463e-b5fa-ecaa147296e8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:13:35.540743 master-0 kubenswrapper[7599]: I0313 01:13:35.540709 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95849efd-fabc-4e21-82e1-a15bc6eee2ba-kube-api-access-t5bj2" (OuterVolumeSpecName: "kube-api-access-t5bj2") pod "95849efd-fabc-4e21-82e1-a15bc6eee2ba" (UID: "95849efd-fabc-4e21-82e1-a15bc6eee2ba"). InnerVolumeSpecName "kube-api-access-t5bj2". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:13:35.542875 master-0 kubenswrapper[7599]: I0313 01:13:35.542809 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95849efd-fabc-4e21-82e1-a15bc6eee2ba-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "95849efd-fabc-4e21-82e1-a15bc6eee2ba" (UID: "95849efd-fabc-4e21-82e1-a15bc6eee2ba"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:13:35.556436 master-0 kubenswrapper[7599]: I0313 01:13:35.556381 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3a666ab-7b35-463e-b5fa-ecaa147296e8-kube-api-access-nlxn8" (OuterVolumeSpecName: "kube-api-access-nlxn8") pod "d3a666ab-7b35-463e-b5fa-ecaa147296e8" (UID: "d3a666ab-7b35-463e-b5fa-ecaa147296e8"). InnerVolumeSpecName "kube-api-access-nlxn8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:13:35.639697 master-0 kubenswrapper[7599]: I0313 01:13:35.639571 7599 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d3a666ab-7b35-463e-b5fa-ecaa147296e8-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:35.639697 master-0 kubenswrapper[7599]: I0313 01:13:35.639618 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t5bj2\" (UniqueName: \"kubernetes.io/projected/95849efd-fabc-4e21-82e1-a15bc6eee2ba-kube-api-access-t5bj2\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:35.639697 master-0 kubenswrapper[7599]: I0313 01:13:35.639634 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d3a666ab-7b35-463e-b5fa-ecaa147296e8-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:35.639697 master-0 kubenswrapper[7599]: I0313 01:13:35.639648 7599 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95849efd-fabc-4e21-82e1-a15bc6eee2ba-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:35.639697 master-0 kubenswrapper[7599]: I0313 01:13:35.639665 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95849efd-fabc-4e21-82e1-a15bc6eee2ba-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:35.639697 master-0 kubenswrapper[7599]: I0313 01:13:35.639679 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nlxn8\" (UniqueName: \"kubernetes.io/projected/d3a666ab-7b35-463e-b5fa-ecaa147296e8-kube-api-access-nlxn8\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:36.775563 master-0 kubenswrapper[7599]: I0313 01:13:35.959615 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" 
event={"ID":"95849efd-fabc-4e21-82e1-a15bc6eee2ba","Type":"ContainerDied","Data":"807c4facb58060fdd54ffa474fd915201f6855e041826ad6bd8e340dbc080dd4"} Mar 13 01:13:36.775563 master-0 kubenswrapper[7599]: I0313 01:13:35.959708 7599 scope.go:117] "RemoveContainer" containerID="6396054c67f8f967f93c8871ea043c327625275a7f1b4769a28ba814149a8b42" Mar 13 01:13:36.775563 master-0 kubenswrapper[7599]: I0313 01:13:35.959900 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8" Mar 13 01:13:36.775563 master-0 kubenswrapper[7599]: I0313 01:13:35.976531 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-748966cb9f-wnsx7_d3a666ab-7b35-463e-b5fa-ecaa147296e8/route-controller-manager/0.log" Mar 13 01:13:36.775563 master-0 kubenswrapper[7599]: I0313 01:13:35.976586 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" event={"ID":"d3a666ab-7b35-463e-b5fa-ecaa147296e8","Type":"ContainerDied","Data":"d926842e3adb53b4cd63fe95b774afe59513b6565439305b9dd8b6b4b8718e8b"} Mar 13 01:13:36.775563 master-0 kubenswrapper[7599]: I0313 01:13:35.976671 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7" Mar 13 01:13:36.775563 master-0 kubenswrapper[7599]: I0313 01:13:36.029337 7599 scope.go:117] "RemoveContainer" containerID="bd8f5db9024b69d4f03b0f10de7429d9d40f50963031947d43152cb3b07cc22c" Mar 13 01:13:37.268091 master-0 kubenswrapper[7599]: I0313 01:13:37.268025 7599 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-8r87t container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 13 01:13:37.268291 master-0 kubenswrapper[7599]: I0313 01:13:37.268097 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" podUID="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 13 01:13:37.868147 master-0 kubenswrapper[7599]: I0313 01:13:37.868110 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_94588bf1-f4cd-4446-999e-0039539e65a5/installer/0.log" Mar 13 01:13:37.868697 master-0 kubenswrapper[7599]: I0313 01:13:37.868179 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:37.994966 master-0 kubenswrapper[7599]: I0313 01:13:37.994738 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-1-master-0_94588bf1-f4cd-4446-999e-0039539e65a5/installer/0.log" Mar 13 01:13:37.994966 master-0 kubenswrapper[7599]: I0313 01:13:37.994846 7599 generic.go:334] "Generic (PLEG): container finished" podID="94588bf1-f4cd-4446-999e-0039539e65a5" containerID="a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046" exitCode=1 Mar 13 01:13:37.994966 master-0 kubenswrapper[7599]: I0313 01:13:37.994910 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-1-master-0" Mar 13 01:13:37.995458 master-0 kubenswrapper[7599]: I0313 01:13:37.994906 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"94588bf1-f4cd-4446-999e-0039539e65a5","Type":"ContainerDied","Data":"a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046"} Mar 13 01:13:37.995458 master-0 kubenswrapper[7599]: I0313 01:13:37.995140 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-1-master-0" event={"ID":"94588bf1-f4cd-4446-999e-0039539e65a5","Type":"ContainerDied","Data":"5244f7095c3f678f82891d0b5312367cb0c23c63204c4e8de4031d103c9168b7"} Mar 13 01:13:37.995458 master-0 kubenswrapper[7599]: I0313 01:13:37.995175 7599 scope.go:117] "RemoveContainer" containerID="a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046" Mar 13 01:13:37.997001 master-0 kubenswrapper[7599]: I0313 01:13:37.996476 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94588bf1-f4cd-4446-999e-0039539e65a5-kube-api-access\") pod 
\"94588bf1-f4cd-4446-999e-0039539e65a5\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " Mar 13 01:13:37.997001 master-0 kubenswrapper[7599]: I0313 01:13:37.996715 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-var-lock\") pod \"94588bf1-f4cd-4446-999e-0039539e65a5\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " Mar 13 01:13:37.997001 master-0 kubenswrapper[7599]: I0313 01:13:37.996790 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-kubelet-dir\") pod \"94588bf1-f4cd-4446-999e-0039539e65a5\" (UID: \"94588bf1-f4cd-4446-999e-0039539e65a5\") " Mar 13 01:13:37.997001 master-0 kubenswrapper[7599]: I0313 01:13:37.996906 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "94588bf1-f4cd-4446-999e-0039539e65a5" (UID: "94588bf1-f4cd-4446-999e-0039539e65a5"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:37.997001 master-0 kubenswrapper[7599]: I0313 01:13:37.996892 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-var-lock" (OuterVolumeSpecName: "var-lock") pod "94588bf1-f4cd-4446-999e-0039539e65a5" (UID: "94588bf1-f4cd-4446-999e-0039539e65a5"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:37.997369 master-0 kubenswrapper[7599]: I0313 01:13:37.997331 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:37.997412 master-0 kubenswrapper[7599]: I0313 01:13:37.997370 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/94588bf1-f4cd-4446-999e-0039539e65a5-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:38.001252 master-0 kubenswrapper[7599]: I0313 01:13:38.001144 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94588bf1-f4cd-4446-999e-0039539e65a5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "94588bf1-f4cd-4446-999e-0039539e65a5" (UID: "94588bf1-f4cd-4446-999e-0039539e65a5"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:13:38.024203 master-0 kubenswrapper[7599]: I0313 01:13:38.024138 7599 scope.go:117] "RemoveContainer" containerID="a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046" Mar 13 01:13:38.024937 master-0 kubenswrapper[7599]: E0313 01:13:38.024819 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046\": container with ID starting with a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046 not found: ID does not exist" containerID="a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046" Mar 13 01:13:38.024937 master-0 kubenswrapper[7599]: I0313 01:13:38.024900 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046"} err="failed to get container status \"a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046\": rpc error: code = NotFound desc = could not find container \"a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046\": container with ID starting with a3722acaa1c717c4394ca4e51354923ece4563b44facb75a8eeaa1bc6b7db046 not found: ID does not exist" Mar 13 01:13:38.100882 master-0 kubenswrapper[7599]: I0313 01:13:38.099533 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/94588bf1-f4cd-4446-999e-0039539e65a5-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:41.022725 master-0 kubenswrapper[7599]: I0313 01:13:41.022569 7599 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff" exitCode=1 Mar 13 01:13:41.022725 master-0 kubenswrapper[7599]: I0313 01:13:41.022665 7599 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff"} Mar 13 01:13:41.023374 master-0 kubenswrapper[7599]: I0313 01:13:41.022770 7599 scope.go:117] "RemoveContainer" containerID="bbc1eef4848241d60b2e14297f83c2738656d477e9aca36b48290bd2306fa11f" Mar 13 01:13:41.023719 master-0 kubenswrapper[7599]: I0313 01:13:41.023676 7599 scope.go:117] "RemoveContainer" containerID="22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff" Mar 13 01:13:41.118819 master-0 kubenswrapper[7599]: E0313 01:13:41.118749 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 01:13:41.119658 master-0 kubenswrapper[7599]: I0313 01:13:41.119632 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 01:13:42.030400 master-0 kubenswrapper[7599]: I0313 01:13:42.030343 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" event={"ID":"eec92350-c2e5-4223-82fe-2c3f78c7945f","Type":"ContainerStarted","Data":"0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67"} Mar 13 01:13:43.216528 master-0 kubenswrapper[7599]: I0313 01:13:43.216410 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:13:43.404663 master-0 kubenswrapper[7599]: I0313 01:13:43.404538 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:13:44.081933 master-0 kubenswrapper[7599]: W0313 01:13:44.081865 7599 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8e52bef89f4b50e4590a1719bcc5d7e5.slice/crio-9eabc21ddc531984c62d09d80b4ff970db77726a77a7e29d7793ee390a8437b9 WatchSource:0}: Error finding container 9eabc21ddc531984c62d09d80b4ff970db77726a77a7e29d7793ee390a8437b9: Status 404 returned error can't find the container with id 9eabc21ddc531984c62d09d80b4ff970db77726a77a7e29d7793ee390a8437b9 Mar 13 01:13:45.057990 master-0 kubenswrapper[7599]: I0313 01:13:45.057822 7599 generic.go:334] "Generic (PLEG): container finished" podID="dfb4407e-71fc-4684-aded-cc84f7e306dc" containerID="0f4de141c58d0310f424a3def148eab28bc960622ee39d63fb837590fa97a3c8" exitCode=0 Mar 13 01:13:45.057990 master-0 kubenswrapper[7599]: I0313 01:13:45.057899 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"dfb4407e-71fc-4684-aded-cc84f7e306dc","Type":"ContainerDied","Data":"0f4de141c58d0310f424a3def148eab28bc960622ee39d63fb837590fa97a3c8"} Mar 13 01:13:45.060643 master-0 kubenswrapper[7599]: I0313 01:13:45.060619 7599 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="41a562ba2a46ef687ff091bc533dc160a94bdc1572141710b80e92f2c08eb013" exitCode=1 Mar 13 01:13:45.060709 master-0 kubenswrapper[7599]: I0313 01:13:45.060686 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerDied","Data":"41a562ba2a46ef687ff091bc533dc160a94bdc1572141710b80e92f2c08eb013"} Mar 13 01:13:45.061248 master-0 kubenswrapper[7599]: I0313 01:13:45.061218 7599 scope.go:117] "RemoveContainer" containerID="41a562ba2a46ef687ff091bc533dc160a94bdc1572141710b80e92f2c08eb013" Mar 13 01:13:45.063038 master-0 kubenswrapper[7599]: I0313 01:13:45.062970 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"4c5b2d8c08ccdfef9dcab32e4f7ca60deac949b04ad9ebcfbb4f605f23b2baeb"} Mar 13 01:13:45.063108 master-0 kubenswrapper[7599]: I0313 01:13:45.063045 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"9eabc21ddc531984c62d09d80b4ff970db77726a77a7e29d7793ee390a8437b9"} Mar 13 01:13:45.066219 master-0 kubenswrapper[7599]: I0313 01:13:45.066192 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173"} Mar 13 01:13:45.076260 master-0 kubenswrapper[7599]: E0313 01:13:45.075792 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:13:45.225498 master-0 kubenswrapper[7599]: E0313 01:13:45.225245 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:13:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:13:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:13:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:13:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d
92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/
ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\
"sizeBytes\\\":448041621},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cf9670d0f269f8d49fd9ef4981999be195f6624a4146aa93d9201eb8acc81053\\\"],\\\"sizeBytes\\\":443271011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8ceca1efee55b9fd5089428476bbc401fe73db7c0b0f5e16d4ad28ed0f0f9d43\\\"],\\\"sizeBytes\\\":438654375},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ace4dcd008420277d915fe983b07bbb50fb3ab0673f28d0166424a75bc2137e7\\\"],\\\"sizeBytes\\\":411585608},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c8f0fda36e9a2040dbe0537361dcd73658df4e669d846f8101a8f9f29f0be9a7\\\"],\\\"sizeBytes\\\":407347126},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3\\\"],\\\"sizeBytes\\\":396521759}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:13:46.076056 master-0 kubenswrapper[7599]: I0313 01:13:46.075966 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-scheduler-master-0" event={"ID":"a1a56802af72ce1aac6b5077f1695ac0","Type":"ContainerStarted","Data":"9ffa27ab0dc3e98ab44b8a36575c0b8aebd551a30b7af7d3a867758695337923"} Mar 13 01:13:46.078194 master-0 kubenswrapper[7599]: I0313 01:13:46.078141 7599 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="4c5b2d8c08ccdfef9dcab32e4f7ca60deac949b04ad9ebcfbb4f605f23b2baeb" exitCode=0 Mar 13 01:13:46.078582 master-0 kubenswrapper[7599]: I0313 01:13:46.078199 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"4c5b2d8c08ccdfef9dcab32e4f7ca60deac949b04ad9ebcfbb4f605f23b2baeb"} Mar 13 
01:13:47.324441 master-0 kubenswrapper[7599]: I0313 01:13:47.324345 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 01:13:47.470988 master-0 kubenswrapper[7599]: I0313 01:13:47.470932 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-var-lock\") pod \"dfb4407e-71fc-4684-aded-cc84f7e306dc\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " Mar 13 01:13:47.471113 master-0 kubenswrapper[7599]: I0313 01:13:47.471061 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-kubelet-dir\") pod \"dfb4407e-71fc-4684-aded-cc84f7e306dc\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " Mar 13 01:13:47.471258 master-0 kubenswrapper[7599]: I0313 01:13:47.471107 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-var-lock" (OuterVolumeSpecName: "var-lock") pod "dfb4407e-71fc-4684-aded-cc84f7e306dc" (UID: "dfb4407e-71fc-4684-aded-cc84f7e306dc"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:47.471258 master-0 kubenswrapper[7599]: I0313 01:13:47.471166 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dfb4407e-71fc-4684-aded-cc84f7e306dc" (UID: "dfb4407e-71fc-4684-aded-cc84f7e306dc"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:47.471381 master-0 kubenswrapper[7599]: I0313 01:13:47.471189 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfb4407e-71fc-4684-aded-cc84f7e306dc-kube-api-access\") pod \"dfb4407e-71fc-4684-aded-cc84f7e306dc\" (UID: \"dfb4407e-71fc-4684-aded-cc84f7e306dc\") " Mar 13 01:13:47.472066 master-0 kubenswrapper[7599]: I0313 01:13:47.472030 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:47.472066 master-0 kubenswrapper[7599]: I0313 01:13:47.472059 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dfb4407e-71fc-4684-aded-cc84f7e306dc-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:47.479739 master-0 kubenswrapper[7599]: I0313 01:13:47.479675 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb4407e-71fc-4684-aded-cc84f7e306dc-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dfb4407e-71fc-4684-aded-cc84f7e306dc" (UID: "dfb4407e-71fc-4684-aded-cc84f7e306dc"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:13:47.572625 master-0 kubenswrapper[7599]: I0313 01:13:47.572583 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dfb4407e-71fc-4684-aded-cc84f7e306dc-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:48.090975 master-0 kubenswrapper[7599]: I0313 01:13:48.090920 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnmjr" event={"ID":"39bfb7e2-d1a8-4791-a52e-72f2b4790f96","Type":"ContainerStarted","Data":"5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58"} Mar 13 01:13:48.093330 master-0 kubenswrapper[7599]: I0313 01:13:48.093302 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t88cc" event={"ID":"c6382e2a-ec14-4457-8f26-3087b19d1e1a","Type":"ContainerStarted","Data":"b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215"} Mar 13 01:13:48.095145 master-0 kubenswrapper[7599]: I0313 01:13:48.095119 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-1-master-0" event={"ID":"dfb4407e-71fc-4684-aded-cc84f7e306dc","Type":"ContainerDied","Data":"1c6514526947873408e0b49fddc6682f5c16ba101c6fab277e750a3d8d114b4c"} Mar 13 01:13:48.095209 master-0 kubenswrapper[7599]: I0313 01:13:48.095147 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c6514526947873408e0b49fddc6682f5c16ba101c6fab277e750a3d8d114b4c" Mar 13 01:13:48.095209 master-0 kubenswrapper[7599]: I0313 01:13:48.095202 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 01:13:48.104108 master-0 kubenswrapper[7599]: I0313 01:13:48.104085 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzlpt" event={"ID":"40c57f94-16b7-4011-bc29-386d52a06d2a","Type":"ContainerStarted","Data":"400c82d44d8e2549c63519241a4fc52c8892085f2c7319dde110c4565e584937"} Mar 13 01:13:48.106122 master-0 kubenswrapper[7599]: I0313 01:13:48.106083 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mqtr" event={"ID":"9992615a-c49b-4ef0-b02b-c6cd2e719fa3","Type":"ContainerStarted","Data":"b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096"} Mar 13 01:13:48.168776 master-0 kubenswrapper[7599]: I0313 01:13:48.168702 7599 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-plhx7 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Mar 13 01:13:48.168999 master-0 kubenswrapper[7599]: I0313 01:13:48.168814 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" podUID="b5757329-8692-4719-b3c7-b5df78110fcf" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Mar 13 01:13:48.757367 master-0 kubenswrapper[7599]: I0313 01:13:48.757289 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-jzlpt" Mar 13 01:13:48.757367 master-0 kubenswrapper[7599]: I0313 01:13:48.757364 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-jzlpt" Mar 13 01:13:49.113421 master-0 
kubenswrapper[7599]: I0313 01:13:49.113360 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-7rhdg_74efa52b-fd97-418a-9a44-914442633f74/openshift-controller-manager-operator/1.log" Mar 13 01:13:49.113660 master-0 kubenswrapper[7599]: I0313 01:13:49.113425 7599 generic.go:334] "Generic (PLEG): container finished" podID="74efa52b-fd97-418a-9a44-914442633f74" containerID="e36d289d22f168d7dd54b3be83741c3fa40edda0e8989b419788c91296bea849" exitCode=1 Mar 13 01:13:49.113660 master-0 kubenswrapper[7599]: I0313 01:13:49.113482 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" event={"ID":"74efa52b-fd97-418a-9a44-914442633f74","Type":"ContainerDied","Data":"e36d289d22f168d7dd54b3be83741c3fa40edda0e8989b419788c91296bea849"} Mar 13 01:13:49.114891 master-0 kubenswrapper[7599]: I0313 01:13:49.114858 7599 scope.go:117] "RemoveContainer" containerID="e36d289d22f168d7dd54b3be83741c3fa40edda0e8989b419788c91296bea849" Mar 13 01:13:49.590588 master-0 kubenswrapper[7599]: I0313 01:13:49.590525 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7mqtr" Mar 13 01:13:49.590588 master-0 kubenswrapper[7599]: I0313 01:13:49.590583 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7mqtr" Mar 13 01:13:49.651455 master-0 kubenswrapper[7599]: I0313 01:13:49.651363 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7mqtr" Mar 13 01:13:49.793398 master-0 kubenswrapper[7599]: I0313 01:13:49.793292 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-jzlpt" podUID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerName="registry-server" 
probeResult="failure" output=< Mar 13 01:13:49.793398 master-0 kubenswrapper[7599]: timeout: failed to connect service ":50051" within 1s Mar 13 01:13:49.793398 master-0 kubenswrapper[7599]: > Mar 13 01:13:50.124237 master-0 kubenswrapper[7599]: I0313 01:13:50.124132 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-7rhdg_74efa52b-fd97-418a-9a44-914442633f74/openshift-controller-manager-operator/1.log" Mar 13 01:13:50.124656 master-0 kubenswrapper[7599]: I0313 01:13:50.124282 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" event={"ID":"74efa52b-fd97-418a-9a44-914442633f74","Type":"ContainerStarted","Data":"9c0bd715b837c01a89df34dba5a1abd4f477608efb9ac5a6df89d6b122c0876b"} Mar 13 01:13:50.949375 master-0 kubenswrapper[7599]: I0313 01:13:50.949243 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:50.949375 master-0 kubenswrapper[7599]: I0313 01:13:50.949331 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:13:51.939338 master-0 kubenswrapper[7599]: I0313 01:13:51.939274 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:51.939338 master-0 kubenswrapper[7599]: I0313 01:13:51.939341 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:51.988789 master-0 kubenswrapper[7599]: I0313 01:13:51.988726 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-t88cc" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerName="registry-server" probeResult="failure" output=< Mar 13 
01:13:51.988789 master-0 kubenswrapper[7599]: timeout: failed to connect service ":50051" within 1s Mar 13 01:13:51.988789 master-0 kubenswrapper[7599]: > Mar 13 01:13:51.989420 master-0 kubenswrapper[7599]: I0313 01:13:51.989386 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:52.424175 master-0 kubenswrapper[7599]: I0313 01:13:52.424062 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:13:53.216730 master-0 kubenswrapper[7599]: I0313 01:13:53.216609 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:13:53.403771 master-0 kubenswrapper[7599]: I0313 01:13:53.403656 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:13:55.076780 master-0 kubenswrapper[7599]: E0313 01:13:55.076657 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:13:55.226273 master-0 kubenswrapper[7599]: E0313 01:13:55.226156 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:13:56.158473 master-0 kubenswrapper[7599]: I0313 01:13:56.158389 7599 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="a665d6a554bcc038bf3cf3aa905f1884c4c54fb9c32ce798ba9ecbaf1bab11e0" exitCode=0 Mar 13 01:13:56.217501 master-0 kubenswrapper[7599]: I0313 01:13:56.217356 
7599 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 01:13:58.168119 master-0 kubenswrapper[7599]: I0313 01:13:58.168027 7599 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-plhx7 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Mar 13 01:13:58.168845 master-0 kubenswrapper[7599]: I0313 01:13:58.168119 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" podUID="b5757329-8692-4719-b3c7-b5df78110fcf" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Mar 13 01:13:58.173390 master-0 kubenswrapper[7599]: I0313 01:13:58.173320 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 13 01:13:58.173604 master-0 kubenswrapper[7599]: I0313 01:13:58.173391 7599 generic.go:334] "Generic (PLEG): container finished" podID="354f29997baa583b6238f7de9108ee10" containerID="2a446f182b10829874f21b28a6050799a0e95cf3b7880d6db31740a7140ff67b" exitCode=137 Mar 13 01:13:58.213322 master-0 kubenswrapper[7599]: I0313 01:13:58.213263 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 13 01:13:58.213505 master-0 kubenswrapper[7599]: I0313 01:13:58.213378 7599 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:13:58.327085 master-0 kubenswrapper[7599]: I0313 01:13:58.326972 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 13 01:13:58.327480 master-0 kubenswrapper[7599]: I0313 01:13:58.327453 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") pod \"354f29997baa583b6238f7de9108ee10\" (UID: \"354f29997baa583b6238f7de9108ee10\") " Mar 13 01:13:58.327797 master-0 kubenswrapper[7599]: I0313 01:13:58.327180 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir" (OuterVolumeSpecName: "data-dir") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "data-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:58.327914 master-0 kubenswrapper[7599]: I0313 01:13:58.327654 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs" (OuterVolumeSpecName: "certs") pod "354f29997baa583b6238f7de9108ee10" (UID: "354f29997baa583b6238f7de9108ee10"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:13:58.328269 master-0 kubenswrapper[7599]: I0313 01:13:58.328238 7599 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:58.328465 master-0 kubenswrapper[7599]: I0313 01:13:58.328387 7599 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/host-path/354f29997baa583b6238f7de9108ee10-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 01:13:58.824194 master-0 kubenswrapper[7599]: I0313 01:13:58.824077 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-jzlpt" Mar 13 01:13:58.891259 master-0 kubenswrapper[7599]: I0313 01:13:58.891160 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-jzlpt" Mar 13 01:13:58.996094 master-0 kubenswrapper[7599]: I0313 01:13:58.995969 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="354f29997baa583b6238f7de9108ee10" path="/var/lib/kubelet/pods/354f29997baa583b6238f7de9108ee10/volumes" Mar 13 01:13:58.996895 master-0 kubenswrapper[7599]: I0313 01:13:58.996839 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 01:13:59.089008 master-0 kubenswrapper[7599]: E0313 01:13:59.088840 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 01:13:59.193409 master-0 kubenswrapper[7599]: I0313 01:13:59.193297 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0-master-0_354f29997baa583b6238f7de9108ee10/etcdctl/0.log" Mar 13 01:13:59.194164 master-0 kubenswrapper[7599]: I0313 01:13:59.193654 7599 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:14:00.201153 master-0 kubenswrapper[7599]: I0313 01:14:00.201050 7599 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="03b6f556b130d09fe1680dbfd846eba4b3a8ef627f216c08cf30ba1c6140ea1c" exitCode=0 Mar 13 01:14:02.088877 master-0 kubenswrapper[7599]: E0313 01:14:02.087935 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0-master-0.189c4188904bb302 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0-master-0,UID:354f29997baa583b6238f7de9108ee10,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:13:28.069370626 +0000 UTC m=+67.341050020,LastTimestamp:2026-03-13 01:13:28.069370626 +0000 UTC m=+67.341050020,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:14:05.077502 master-0 kubenswrapper[7599]: E0313 01:14:05.077386 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:14:05.226752 master-0 kubenswrapper[7599]: E0313 01:14:05.226651 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" Mar 13 01:14:06.217036 master-0 kubenswrapper[7599]: I0313 01:14:06.216921 7599 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 01:14:07.245835 master-0 kubenswrapper[7599]: I0313 01:14:07.245777 7599 generic.go:334] "Generic (PLEG): container finished" podID="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" containerID="f73c75626f2b8420b208819100f67cc78e1afc63da934e6341110ce6fd48cd90" exitCode=0 Mar 13 01:14:08.168117 master-0 kubenswrapper[7599]: I0313 01:14:08.167996 7599 patch_prober.go:28] interesting pod/authentication-operator-7c6989d6c4-plhx7 container/authentication-operator namespace/openshift-authentication-operator: Liveness probe status=failure output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" start-of-body= Mar 13 01:14:08.168117 master-0 kubenswrapper[7599]: I0313 01:14:08.168072 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" podUID="b5757329-8692-4719-b3c7-b5df78110fcf" containerName="authentication-operator" probeResult="failure" output="Get \"https://10.128.0.17:8443/healthz\": dial tcp 10.128.0.17:8443: connect: connection refused" Mar 13 01:14:13.209843 master-0 kubenswrapper[7599]: E0313 01:14:13.209735 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 01:14:13.291824 master-0 kubenswrapper[7599]: I0313 01:14:13.291742 7599 generic.go:334] "Generic (PLEG): container finished" podID="c6db75e5-efd1-4bfa-9941-0934d7621ba2" 
containerID="c248d157af93f66dc74e732d276f334cdb9f66f93ff85dda8f8ef75466a1cda2" exitCode=0 Mar 13 01:14:14.302219 master-0 kubenswrapper[7599]: I0313 01:14:14.302148 7599 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="dc0cc2d6bf9be0a194a0217c205d2ab79cbfb7d5acd7c9e8902600ce17ed4649" exitCode=0 Mar 13 01:14:14.304337 master-0 kubenswrapper[7599]: I0313 01:14:14.304295 7599 generic.go:334] "Generic (PLEG): container finished" podID="f2f0667c-90d6-4a6b-b540-9bd0ab5973ea" containerID="db75a500d25df1d35034bc9e7d835e3af06e992e3af2605476ce0e45095ba6b9" exitCode=0 Mar 13 01:14:15.078277 master-0 kubenswrapper[7599]: E0313 01:14:15.078166 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:14:15.228216 master-0 kubenswrapper[7599]: E0313 01:14:15.228072 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:14:15.315533 master-0 kubenswrapper[7599]: I0313 01:14:15.315457 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-mcps9_c687237e-50e5-405d-8fef-0efbc3866630/approver/0.log" Mar 13 01:14:15.316705 master-0 kubenswrapper[7599]: I0313 01:14:15.316657 7599 generic.go:334] "Generic (PLEG): container finished" podID="c687237e-50e5-405d-8fef-0efbc3866630" containerID="826ddf0fad5a47b74a9e97796304f54274bf436e1dab02b9917102d0ced785b8" exitCode=1 Mar 13 01:14:15.322482 master-0 kubenswrapper[7599]: I0313 01:14:15.322401 7599 generic.go:334] "Generic (PLEG): container finished" 
podID="fbfc2caf-126e-41b9-9b31-05f7a45d8536" containerID="5436fbc43037209189594bd015e39350294b9b8da6b6096cb145d36bfb03543f" exitCode=0 Mar 13 01:14:15.327573 master-0 kubenswrapper[7599]: I0313 01:14:15.327540 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-4zrk7_dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/network-operator/0.log" Mar 13 01:14:15.327845 master-0 kubenswrapper[7599]: I0313 01:14:15.327807 7599 generic.go:334] "Generic (PLEG): container finished" podID="dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc" containerID="7f4c53a355951175886abfb80eb4256c32b51f0ad7d9c970345c8e4c70d93ccb" exitCode=255 Mar 13 01:14:16.216990 master-0 kubenswrapper[7599]: I0313 01:14:16.216792 7599 prober.go:107] "Probe failed" probeType="Startup" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 01:14:18.350115 master-0 kubenswrapper[7599]: I0313 01:14:18.349978 7599 generic.go:334] "Generic (PLEG): container finished" podID="b5757329-8692-4719-b3c7-b5df78110fcf" containerID="9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d" exitCode=0 Mar 13 01:14:23.382372 master-0 kubenswrapper[7599]: I0313 01:14:23.382296 7599 generic.go:334] "Generic (PLEG): container finished" podID="23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b" containerID="b30ae4d37e850868384d04498318b52f585a63274ae43d082fa8cb4389cea8b3" exitCode=0 Mar 13 01:14:25.079127 master-0 kubenswrapper[7599]: E0313 01:14:25.079044 7599 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 
01:14:25.079127 master-0 kubenswrapper[7599]: I0313 01:14:25.079118 7599 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 01:14:25.230008 master-0 kubenswrapper[7599]: E0313 01:14:25.229878 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:14:25.230008 master-0 kubenswrapper[7599]: E0313 01:14:25.229957 7599 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 01:14:28.419546 master-0 kubenswrapper[7599]: I0313 01:14:28.419400 7599 generic.go:334] "Generic (PLEG): container finished" podID="96b67a99-eada-44d7-93eb-cc3ced777fc6" containerID="cc1038b189ab36843989b837c930bbf20934f08cf043e09fd788646b7d078f2a" exitCode=0 Mar 13 01:14:29.426418 master-0 kubenswrapper[7599]: I0313 01:14:29.426301 7599 generic.go:334] "Generic (PLEG): container finished" podID="fde89b0b-7133-4b97-9e35-51c0382bd366" containerID="aa8d570cc916b085b102875f5c8076691d32fc0570491e0ffdf16bc87e8e94b9" exitCode=0 Mar 13 01:14:31.843384 master-0 kubenswrapper[7599]: I0313 01:14:31.843283 7599 status_manager.go:851] "Failed to get status for pod" podUID="6c32e816-aa69-4e9c-9fbf-56595c764f3b" pod="openshift-kube-scheduler/installer-1-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-1-master-0)" Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: E0313 01:14:32.514179 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92" Netns:"/var/run/netns/fb44ef15-aff2-42d7-bc8e-2189a364c316" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: > Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: E0313 01:14:32.514247 7599 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92" Netns:"/var/run/netns/fb44ef15-aff2-42d7-bc8e-2189a364c316" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: > pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: E0313 01:14:32.514266 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92" Netns:"/var/run/netns/fb44ef15-aff2-42d7-bc8e-2189a364c316" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: > pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:14:32.514387 master-0 kubenswrapper[7599]: E0313 01:14:32.514330 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-1-master-0_openshift-kube-apiserver(fdcd8438-d33f-490f-a841-8944c58506f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-1-master-0_openshift-kube-apiserver(fdcd8438-d33f-490f-a841-8944c58506f8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92\\\" Netns:\\\"/var/run/netns/fb44ef15-aff2-42d7-bc8e-2189a364c316\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=3cae7aeae2effa17caee17f14423266cda421dbf886f37826f69ff6a4be5fb92;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-1-master-0" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" Mar 13 01:14:33.000803 master-0 kubenswrapper[7599]: E0313 01:14:33.000697 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:14:33.001634 master-0 kubenswrapper[7599]: E0313 01:14:33.001404 7599 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.019s" Mar 13 01:14:33.001868 
master-0 kubenswrapper[7599]: I0313 01:14:33.001816 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:14:33.001868 master-0 kubenswrapper[7599]: I0313 01:14:33.001850 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:14:33.002077 master-0 kubenswrapper[7599]: I0313 01:14:33.002041 7599 scope.go:117] "RemoveContainer" containerID="a665d6a554bcc038bf3cf3aa905f1884c4c54fb9c32ce798ba9ecbaf1bab11e0" Mar 13 01:14:33.003828 master-0 kubenswrapper[7599]: I0313 01:14:33.003770 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7mqtr" Mar 13 01:14:33.003828 master-0 kubenswrapper[7599]: I0313 01:14:33.003817 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:14:33.005106 master-0 kubenswrapper[7599]: I0313 01:14:33.005063 7599 scope.go:117] "RemoveContainer" containerID="9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d" Mar 13 01:14:33.006383 master-0 kubenswrapper[7599]: I0313 01:14:33.006335 7599 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173"} pod="kube-system/bootstrap-kube-controller-manager-master-0" containerMessage="Container kube-controller-manager failed startup probe, will be restarted" Mar 13 01:14:33.006545 master-0 kubenswrapper[7599]: I0313 01:14:33.006472 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" 
containerID="cri-o://6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173" gracePeriod=30 Mar 13 01:14:33.022435 master-0 kubenswrapper[7599]: I0313 01:14:33.021367 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 01:14:33.060978 master-0 kubenswrapper[7599]: I0313 01:14:33.060905 7599 scope.go:117] "RemoveContainer" containerID="2a446f182b10829874f21b28a6050799a0e95cf3b7880d6db31740a7140ff67b" Mar 13 01:14:33.457549 master-0 kubenswrapper[7599]: I0313 01:14:33.457449 7599 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173" exitCode=2 Mar 13 01:14:33.462936 master-0 kubenswrapper[7599]: I0313 01:14:33.462883 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:14:33.463446 master-0 kubenswrapper[7599]: I0313 01:14:33.463405 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:14:34.096675 master-0 kubenswrapper[7599]: E0313 01:14:34.096557 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:14:34.096675 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df" Netns:"/var/run/netns/a2b5723f-a078-49cd-b955-99bc35a25cfa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.096675 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.096675 master-0 kubenswrapper[7599]: > Mar 13 01:14:34.097890 master-0 kubenswrapper[7599]: E0313 01:14:34.096693 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:14:34.097890 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df" Netns:"/var/run/netns/a2b5723f-a078-49cd-b955-99bc35a25cfa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.097890 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.097890 master-0 kubenswrapper[7599]: > pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:14:34.097890 master-0 kubenswrapper[7599]: E0313 01:14:34.096739 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:14:34.097890 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df" Netns:"/var/run/netns/a2b5723f-a078-49cd-b955-99bc35a25cfa" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the 
networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.097890 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.097890 master-0 kubenswrapper[7599]: > pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:14:34.097890 master-0 kubenswrapper[7599]: E0313 01:14:34.096886 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-4-master-0_openshift-kube-scheduler(7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-4-master-0_openshift-kube-scheduler(7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df\\\" 
Netns:\\\"/var/run/netns/a2b5723f-a078-49cd-b955-99bc35a25cfa\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=51bb1de57fa4b4bb919581db1f1d02d5fe942642e7aedcbfda44f815bccaf4df;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-scheduler/installer-4-master-0" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Mar 13 01:14:34.177009 master-0 kubenswrapper[7599]: E0313 01:14:34.176947 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:14:34.177009 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b" Netns:"/var/run/netns/e35dc99e-6e36-4f56-8137-22aa39c66b67" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.177009 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.177009 master-0 kubenswrapper[7599]: > Mar 13 01:14:34.177237 master-0 kubenswrapper[7599]: E0313 01:14:34.177031 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:14:34.177237 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b" Netns:"/var/run/netns/e35dc99e-6e36-4f56-8137-22aa39c66b67" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the 
pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.177237 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.177237 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:14:34.177237 master-0 kubenswrapper[7599]: E0313 01:14:34.177088 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:14:34.177237 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b" Netns:"/var/run/netns/e35dc99e-6e36-4f56-8137-22aa39c66b67" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.177237 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.177237 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:14:34.177237 master-0 kubenswrapper[7599]: E0313 01:14:34.177170 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api(56e20b21-ba17-46ae-a740-0e7bd45eae5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api(56e20b21-ba17-46ae-a740-0e7bd45eae5f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b\\\" Netns:\\\"/var/run/netns/e35dc99e-6e36-4f56-8137-22aa39c66b67\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=3357dfb2683b66aac0e8458a3ddc52457b4eb230e65056e4a66634a9fda9492b;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" podUID="56e20b21-ba17-46ae-a740-0e7bd45eae5f" Mar 13 01:14:34.189355 master-0 kubenswrapper[7599]: E0313 01:14:34.189306 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:14:34.189355 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a" Netns:"/var/run/netns/7d826888-de19-484f-8172-dbb0296c4c54" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: 
failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.189355 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.189355 master-0 kubenswrapper[7599]: > Mar 13 01:14:34.189551 master-0 kubenswrapper[7599]: E0313 01:14:34.189383 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:14:34.189551 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a" Netns:"/var/run/netns/7d826888-de19-484f-8172-dbb0296c4c54" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.189551 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.189551 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:14:34.189551 master-0 kubenswrapper[7599]: E0313 01:14:34.189403 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:14:34.189551 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a" Netns:"/var/run/netns/7d826888-de19-484f-8172-dbb0296c4c54" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.189551 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.189551 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:14:34.189551 master-0 kubenswrapper[7599]: E0313 01:14:34.189456 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api(2581e5b5-8cbb-4fa5-9888-98fb572a6232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api(2581e5b5-8cbb-4fa5-9888-98fb572a6232)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a\\\" Netns:\\\"/var/run/netns/7d826888-de19-484f-8172-dbb0296c4c54\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=05f9430cbfa4a26f4ca0289ad8f9d2a441a2337fe0b1144b299b4105acc6c51a;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" podUID="2581e5b5-8cbb-4fa5-9888-98fb572a6232" Mar 13 01:14:34.405344 master-0 kubenswrapper[7599]: E0313 01:14:34.405285 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:14:34.405344 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request 
failed with status 400: 'ContainerID:"9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f" Netns:"/var/run/netns/aeadbdfe-5c46-493b-89e3-bcbd5e0311a9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-5dvnt?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.405344 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.405344 master-0 kubenswrapper[7599]: > Mar 13 01:14:34.405524 master-0 kubenswrapper[7599]: E0313 01:14:34.405361 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:14:34.405524 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f" Netns:"/var/run/netns/aeadbdfe-5c46-493b-89e3-bcbd5e0311a9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-5dvnt?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.405524 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.405524 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:14:34.405524 master-0 kubenswrapper[7599]: E0313 01:14:34.405382 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:14:34.405524 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f" Netns:"/var/run/netns/aeadbdfe-5c46-493b-89e3-bcbd5e0311a9" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: 
SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-5dvnt?timeout=1m0s": context deadline exceeded (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.405524 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.405524 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:14:34.405524 master-0 kubenswrapper[7599]: E0313 01:14:34.405451 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api(21110b48-25fc-434a-b156-7f6bd6064bed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api(21110b48-25fc-434a-b156-7f6bd6064bed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f\\\" 
Netns:\\\"/var/run/netns/aeadbdfe-5c46-493b-89e3-bcbd5e0311a9\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=9edcd8a9610adce5e83d1265fc210562ec189e623438115b515ba454aa10df4f;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-5dvnt?timeout=1m0s\\\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" podUID="21110b48-25fc-434a-b156-7f6bd6064bed" Mar 13 01:14:34.409772 master-0 kubenswrapper[7599]: E0313 01:14:34.409709 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:14:34.409772 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6" Netns:"/var/run/netns/7f87e580-3a1c-4182-afce-4e3162387cb4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-hn4jh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.409772 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.409772 master-0 kubenswrapper[7599]: > Mar 13 
01:14:34.409918 master-0 kubenswrapper[7599]: E0313 01:14:34.409793 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:14:34.409918 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6" Netns:"/var/run/netns/7f87e580-3a1c-4182-afce-4e3162387cb4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-hn4jh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.409918 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.409918 master-0 kubenswrapper[7599]: > pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:14:34.409918 master-0 kubenswrapper[7599]: E0313 01:14:34.409816 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:14:34.409918 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6" Netns:"/var/run/netns/7f87e580-3a1c-4182-afce-4e3162387cb4" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: 
status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-hn4jh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.409918 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.409918 master-0 kubenswrapper[7599]: > pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:14:34.409918 master-0 kubenswrapper[7599]: E0313 01:14:34.409868 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"insights-operator-8f89dfddd-hn4jh_openshift-insights(6e799871-735a-44e8-8193-24c5bb388928)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"insights-operator-8f89dfddd-hn4jh_openshift-insights(6e799871-735a-44e8-8193-24c5bb388928)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6\\\" Netns:\\\"/var/run/netns/7f87e580-3a1c-4182-afce-4e3162387cb4\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=560b44b2574a08e6d117ce9546acdc25e3d4a5f8b8c021313981d486eb804ff6;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-hn4jh?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" podUID="6e799871-735a-44e8-8193-24c5bb388928" Mar 13 01:14:34.423112 master-0 kubenswrapper[7599]: E0313 01:14:34.422952 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:14:34.423112 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163" Netns:"/var/run/netns/f26d12dd-a4d5-4119-94d2-505ed37c80d8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.423112 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.423112 master-0 kubenswrapper[7599]: > Mar 13 01:14:34.423293 master-0 kubenswrapper[7599]: E0313 01:14:34.423134 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:14:34.423293 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163" Netns:"/var/run/netns/f26d12dd-a4d5-4119-94d2-505ed37c80d8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: 
failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.423293 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.423293 master-0 kubenswrapper[7599]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:14:34.423293 master-0 kubenswrapper[7599]: E0313 01:14:34.423161 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:14:34.423293 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163" Netns:"/var/run/netns/f26d12dd-a4d5-4119-94d2-505ed37c80d8" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.423293 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.423293 master-0 kubenswrapper[7599]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:14:34.424046 master-0 kubenswrapper[7599]: E0313 01:14:34.423987 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator(65dd1dc7-1b90-40f6-82c9-dee90a1fa852)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator(65dd1dc7-1b90-40f6-82c9-dee90a1fa852)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163\\\" Netns:\\\"/var/run/netns/f26d12dd-a4d5-4119-94d2-505ed37c80d8\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=c609714d7fee73262d7a4def58199003ec18bdf7e4f9eab2a934912bc9bca163;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" podUID="65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Mar 13 01:14:34.476611 master-0 kubenswrapper[7599]: I0313 01:14:34.473995 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:14:34.476611 master-0 kubenswrapper[7599]: I0313 01:14:34.474048 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:14:34.476611 master-0 kubenswrapper[7599]: I0313 01:14:34.474289 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:14:34.476611 master-0 kubenswrapper[7599]: I0313 01:14:34.474344 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:14:34.476611 master-0 kubenswrapper[7599]: I0313 01:14:34.476455 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:14:34.476611 master-0 kubenswrapper[7599]: I0313 01:14:34.476567 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:14:34.480034 master-0 kubenswrapper[7599]: I0313 01:14:34.474354 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:14:34.480034 master-0 kubenswrapper[7599]: I0313 01:14:34.474597 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:14:34.480034 master-0 kubenswrapper[7599]: I0313 01:14:34.474008 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:14:34.480034 master-0 kubenswrapper[7599]: I0313 01:14:34.477502 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:14:34.480034 master-0 kubenswrapper[7599]: I0313 01:14:34.477909 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:14:34.480034 master-0 kubenswrapper[7599]: I0313 01:14:34.478081 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:14:34.543339 master-0 kubenswrapper[7599]: E0313 01:14:34.543230 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:14:34.543339 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11" Netns:"/var/run/netns/eee02198-cd36-4fe3-8bf2-27123c556ec5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.543339 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.543339 master-0 kubenswrapper[7599]: > Mar 13 01:14:34.543893 master-0 kubenswrapper[7599]: E0313 01:14:34.543372 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:14:34.543893 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11" Netns:"/var/run/netns/eee02198-cd36-4fe3-8bf2-27123c556ec5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.543893 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.543893 master-0 kubenswrapper[7599]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:14:34.543893 master-0 kubenswrapper[7599]: E0313 01:14:34.543408 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:14:34.543893 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11" Netns:"/var/run/netns/eee02198-cd36-4fe3-8bf2-27123c556ec5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: 
[openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.543893 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.543893 master-0 kubenswrapper[7599]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:14:34.543893 master-0 kubenswrapper[7599]: E0313 01:14:34.543541 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(7106c6fe-7c8d-45b9-bc5c-521db743663f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(7106c6fe-7c8d-45b9-bc5c-521db743663f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:\\\"a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11\\\" Netns:\\\"/var/run/netns/eee02198-cd36-4fe3-8bf2-27123c556ec5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=a910160f9667f0c74e9934de69da8c0495e04c8fde283234a26694100044bd11;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: E0313 01:14:34.550474 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create 
pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd" Netns:"/var/run/netns/3fbc8ae2-ea4d-4888-a07e-581c9455a0e5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: > Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: E0313 01:14:34.550580 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd" Netns:"/var/run/netns/3fbc8ae2-ea4d-4888-a07e-581c9455a0e5" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to 
update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: > pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: E0313 01:14:34.550607 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd" Netns:"/var/run/netns/3fbc8ae2-ea4d-4888-a07e-581c9455a0e5" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: > pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:14:34.550870 master-0 kubenswrapper[7599]: E0313 01:14:34.550685 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator(778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator(778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd\\\" Netns:\\\"/var/run/netns/3fbc8ae2-ea4d-4888-a07e-581c9455a0e5\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6bbf13529e492e0c2b1faf3d4732c97da7e619ce4c19a6e7fdfeac21c359f6cd;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" podUID="778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: E0313 01:14:34.567251 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf" Netns:"/var/run/netns/5168700e-b6f7-4781-9386-dfe1b07b166d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error 
setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: > Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: E0313 01:14:34.567388 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf" Netns:"/var/run/netns/5168700e-b6f7-4781-9386-dfe1b07b166d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: E0313 01:14:34.567437 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf" Netns:"/var/run/netns/5168700e-b6f7-4781-9386-dfe1b07b166d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:14:34.568609 master-0 kubenswrapper[7599]: E0313 01:14:34.567567 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator(65ef9aae-25a5-46c6-adf3-634f8f7a29bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator(65ef9aae-25a5-46c6-adf3-634f8f7a29bc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf\\\" Netns:\\\"/var/run/netns/5168700e-b6f7-4781-9386-dfe1b07b166d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=dc83034c18d885dd0f73ef1a9d61f8609b93a659b8e8ed26c025ede658020bbf;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" podUID="65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Mar 13 01:14:35.079589 master-0 kubenswrapper[7599]: E0313 01:14:35.079474 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 13 01:14:35.480858 master-0 kubenswrapper[7599]: I0313 01:14:35.480775 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:14:35.480858 master-0 kubenswrapper[7599]: I0313 01:14:35.480853 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:14:35.481821 master-0 kubenswrapper[7599]: I0313 01:14:35.480861 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:14:35.481821 master-0 kubenswrapper[7599]: I0313 01:14:35.481319 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:14:35.481821 master-0 kubenswrapper[7599]: I0313 01:14:35.481711 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:14:35.481821 master-0 kubenswrapper[7599]: I0313 01:14:35.481759 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:14:36.092225 master-0 kubenswrapper[7599]: E0313 01:14:36.091764 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{route-controller-manager-748966cb9f-wnsx7.189c4189700c2fa4 openshift-route-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-route-controller-manager,Name:route-controller-manager-748966cb9f-wnsx7,UID:d3a666ab-7b35-463e-b5fa-ecaa147296e8,APIVersion:v1,ResourceVersion:7418,FieldPath:spec.containers{route-controller-manager},},Reason:Pulled,Message:Successfully pulled image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\" in 24.635s (24.635s including waiting). Image size: 487090672 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:13:31.823304612 +0000 UTC m=+71.094984016,LastTimestamp:2026-03-13 01:13:31.823304612 +0000 UTC m=+71.094984016,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:14:37.265428 master-0 kubenswrapper[7599]: I0313 01:14:37.265350 7599 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-8r87t container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 13 01:14:37.266118 master-0 kubenswrapper[7599]: I0313 01:14:37.265434 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" podUID="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" containerName="etcd-operator" probeResult="failure" output="Get 
\"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 13 01:14:45.281360 master-0 kubenswrapper[7599]: E0313 01:14:45.281257 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 13 01:14:45.382147 master-0 kubenswrapper[7599]: E0313 01:14:45.381784 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:14:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:14:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:14:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:14:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3f3a3fc0144fd075212160b467722ab529c42c226d7e87d397f821c8e7df8628\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:ec7e570be8cf0476a38d
4db98b0455d5b94538b5b7b2ddb3b7d8f12c724c6ddb\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284752601},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:50fe533376cf6d45ae7e343c58d9c480fb1bc96859ffbbdc51ce2c428de2b653\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd2f5e111c85cdeeff92a61f881c260de30a26d2d9938eef43024e637422abaa\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221745878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518
384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd614
4facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d\\\"],\\\"sizeBytes\\\":467234714},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"names\\\":[\\
\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:14:55.382540 master-0 kubenswrapper[7599]: E0313 01:14:55.382375 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:14:55.682878 master-0 kubenswrapper[7599]: E0313 01:14:55.682615 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 13 01:15:05.383681 master-0 kubenswrapper[7599]: E0313 01:15:05.383624 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:15:06.484118 master-0 kubenswrapper[7599]: E0313 
01:15:06.484031 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 13 01:15:07.024902 master-0 kubenswrapper[7599]: E0313 01:15:07.024785 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:15:07.025209 master-0 kubenswrapper[7599]: E0313 01:15:07.025132 7599 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.021s" Mar 13 01:15:07.025209 master-0 kubenswrapper[7599]: I0313 01:15:07.025174 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"03b6f556b130d09fe1680dbfd846eba4b3a8ef627f216c08cf30ba1c6140ea1c"} Mar 13 01:15:07.026271 master-0 kubenswrapper[7599]: I0313 01:15:07.026232 7599 scope.go:117] "RemoveContainer" containerID="7f4c53a355951175886abfb80eb4256c32b51f0ad7d9c970345c8e4c70d93ccb" Mar 13 01:15:07.027821 master-0 kubenswrapper[7599]: I0313 01:15:07.027261 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:15:07.027821 master-0 kubenswrapper[7599]: I0313 01:15:07.027323 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" event={"ID":"77e6cd9e-b6ef-491c-a5c3-60dab81fd752","Type":"ContainerDied","Data":"f73c75626f2b8420b208819100f67cc78e1afc63da934e6341110ce6fd48cd90"} Mar 13 01:15:07.027821 master-0 kubenswrapper[7599]: I0313 01:15:07.027355 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" event={"ID":"c6db75e5-efd1-4bfa-9941-0934d7621ba2","Type":"ContainerDied","Data":"c248d157af93f66dc74e732d276f334cdb9f66f93ff85dda8f8ef75466a1cda2"} Mar 13 01:15:07.027821 master-0 kubenswrapper[7599]: I0313 01:15:07.027568 7599 scope.go:117] "RemoveContainer" containerID="f73c75626f2b8420b208819100f67cc78e1afc63da934e6341110ce6fd48cd90" Mar 13 01:15:07.027821 master-0 kubenswrapper[7599]: I0313 01:15:07.027645 7599 scope.go:117] "RemoveContainer" containerID="b30ae4d37e850868384d04498318b52f585a63274ae43d082fa8cb4389cea8b3" Mar 13 01:15:07.028141 master-0 kubenswrapper[7599]: I0313 01:15:07.028099 7599 scope.go:117] "RemoveContainer" containerID="cc1038b189ab36843989b837c930bbf20934f08cf043e09fd788646b7d078f2a" Mar 13 01:15:07.028320 master-0 kubenswrapper[7599]: I0313 01:15:07.028264 7599 scope.go:117] "RemoveContainer" containerID="826ddf0fad5a47b74a9e97796304f54274bf436e1dab02b9917102d0ced785b8" Mar 13 01:15:07.028449 master-0 kubenswrapper[7599]: I0313 01:15:07.028417 7599 scope.go:117] "RemoveContainer" containerID="db75a500d25df1d35034bc9e7d835e3af06e992e3af2605476ce0e45095ba6b9" Mar 13 01:15:07.028849 master-0 kubenswrapper[7599]: I0313 01:15:07.028825 7599 scope.go:117] "RemoveContainer" containerID="5436fbc43037209189594bd015e39350294b9b8da6b6096cb145d36bfb03543f" Mar 13 01:15:07.029196 master-0 kubenswrapper[7599]: I0313 01:15:07.029160 7599 scope.go:117] "RemoveContainer" containerID="c248d157af93f66dc74e732d276f334cdb9f66f93ff85dda8f8ef75466a1cda2" Mar 13 01:15:07.039065 master-0 kubenswrapper[7599]: I0313 01:15:07.039035 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 01:15:07.777055 master-0 kubenswrapper[7599]: I0313 01:15:07.776970 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-network-node-identity_network-node-identity-mcps9_c687237e-50e5-405d-8fef-0efbc3866630/approver/0.log" Mar 13 01:15:07.793543 master-0 kubenswrapper[7599]: I0313 01:15:07.793462 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-4zrk7_dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/network-operator/0.log" Mar 13 01:15:10.095860 master-0 kubenswrapper[7599]: E0313 01:15:10.095670 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{community-operators-jzlpt.189c4189b4c257b0 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:community-operators-jzlpt,UID:40c57f94-16b7-4011-bc29-386d52a06d2a,APIVersion:v1,ResourceVersion:6858,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/community-operator-index:v4.18\" in 32.918s (32.918s including waiting). 
Image size: 1221745878 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:13:32.976093104 +0000 UTC m=+72.247772498,LastTimestamp:2026-03-13 01:13:32.976093104 +0000 UTC m=+72.247772498,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:15:15.384819 master-0 kubenswrapper[7599]: E0313 01:15:15.384615 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:15:15.848183 master-0 kubenswrapper[7599]: I0313 01:15:15.848110 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-p5c8r_75a53c09-210a-4346-99b0-a632b9e0a3c9/ingress-operator/0.log" Mar 13 01:15:15.848422 master-0 kubenswrapper[7599]: I0313 01:15:15.848204 7599 generic.go:334] "Generic (PLEG): container finished" podID="75a53c09-210a-4346-99b0-a632b9e0a3c9" containerID="951aa4d6803ad0268be9d58f3b51ebac5555d4f85866ee29a2837692062094ee" exitCode=1 Mar 13 01:15:18.085114 master-0 kubenswrapper[7599]: E0313 01:15:18.085013 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="3.2s" Mar 13 01:15:19.875914 master-0 kubenswrapper[7599]: I0313 01:15:19.875848 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-7rhdg_74efa52b-fd97-418a-9a44-914442633f74/openshift-controller-manager-operator/2.log" Mar 13 01:15:19.876900 master-0 kubenswrapper[7599]: I0313 01:15:19.876737 7599 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-7rhdg_74efa52b-fd97-418a-9a44-914442633f74/openshift-controller-manager-operator/1.log" Mar 13 01:15:19.876900 master-0 kubenswrapper[7599]: I0313 01:15:19.876779 7599 generic.go:334] "Generic (PLEG): container finished" podID="74efa52b-fd97-418a-9a44-914442633f74" containerID="9c0bd715b837c01a89df34dba5a1abd4f477608efb9ac5a6df89d6b122c0876b" exitCode=255 Mar 13 01:15:20.037123 master-0 kubenswrapper[7599]: E0313 01:15:20.036998 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 01:15:25.385814 master-0 kubenswrapper[7599]: E0313 01:15:25.385708 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:15:25.385814 master-0 kubenswrapper[7599]: E0313 01:15:25.385768 7599 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 01:15:31.286131 master-0 kubenswrapper[7599]: E0313 01:15:31.285976 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 13 01:15:31.848466 master-0 kubenswrapper[7599]: I0313 01:15:31.848352 7599 status_manager.go:851] "Failed to get status for pod" podUID="eec92350-c2e5-4223-82fe-2c3f78c7945f" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" err="the server was unable to return a response in the time allotted, but may still be processing the 
request (get pods machine-approver-955fcfb87-56dsn)" Mar 13 01:15:32.998405 master-0 kubenswrapper[7599]: I0313 01:15:32.998323 7599 generic.go:334] "Generic (PLEG): container finished" podID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerID="94468d369b5f43adf08abc9d6a6230238254bef0eb81d4e6a3d5e925f29bcc13" exitCode=0 Mar 13 01:15:34.290297 master-0 kubenswrapper[7599]: E0313 01:15:34.290217 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:15:34.290297 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394" Netns:"/var/run/netns/08c08969-ce96-493e-b214-c2e862900454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting 
headers) Mar 13 01:15:34.290297 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:34.290297 master-0 kubenswrapper[7599]: > Mar 13 01:15:34.291051 master-0 kubenswrapper[7599]: E0313 01:15:34.290313 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:15:34.291051 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394" Netns:"/var/run/netns/08c08969-ce96-493e-b214-c2e862900454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:34.291051 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:34.291051 master-0 kubenswrapper[7599]: > pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:15:34.291051 master-0 kubenswrapper[7599]: E0313 01:15:34.290333 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:15:34.291051 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394" Netns:"/var/run/netns/08c08969-ce96-493e-b214-c2e862900454" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the 
networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:34.291051 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:34.291051 master-0 kubenswrapper[7599]: > pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:15:34.291051 master-0 kubenswrapper[7599]: E0313 01:15:34.290390 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-1-master-0_openshift-kube-apiserver(fdcd8438-d33f-490f-a841-8944c58506f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-1-master-0_openshift-kube-apiserver(fdcd8438-d33f-490f-a841-8944c58506f8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394\\\" 
Netns:\\\"/var/run/netns/08c08969-ce96-493e-b214-c2e862900454\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=35320f8b9dc42609d258a43d476239088e6b138f1eeaaa2864d41e879d59a394;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-1-master-0" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" Mar 13 01:15:35.014058 master-0 kubenswrapper[7599]: I0313 01:15:35.013996 7599 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0" exitCode=1 Mar 13 01:15:35.014269 master-0 kubenswrapper[7599]: I0313 01:15:35.014099 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:15:35.014653 master-0 kubenswrapper[7599]: I0313 01:15:35.014619 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: E0313 01:15:35.367862 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74" Netns:"/var/run/netns/7b647cd7-f46f-429a-a57e-7be2aec0eb4e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: > Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: E0313 01:15:35.367979 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74" Netns:"/var/run/netns/7b647cd7-f46f-429a-a57e-7be2aec0eb4e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: 
[openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: E0313 01:15:35.368031 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74" Netns:"/var/run/netns/7b647cd7-f46f-429a-a57e-7be2aec0eb4e" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:15:35.368510 master-0 kubenswrapper[7599]: E0313 01:15:35.368175 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api(56e20b21-ba17-46ae-a740-0e7bd45eae5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api(56e20b21-ba17-46ae-a740-0e7bd45eae5f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74\\\" Netns:\\\"/var/run/netns/7b647cd7-f46f-429a-a57e-7be2aec0eb4e\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=6527aebd8b8d197c94a279ac10a182297ead4b09f2c63ca1b847680ff6051f74;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s\\\": 
net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" podUID="56e20b21-ba17-46ae-a740-0e7bd45eae5f" Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: E0313 01:15:35.474216 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04" Netns:"/var/run/netns/8c106d23-a57b-4a7a-a5b4-9250188f5abd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: 
[openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: > Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: E0313 01:15:35.474307 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04" Netns:"/var/run/netns/8c106d23-a57b-4a7a-a5b4-9250188f5abd" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: E0313 01:15:35.474342 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04" Netns:"/var/run/netns/8c106d23-a57b-4a7a-a5b4-9250188f5abd" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:15:35.474646 master-0 kubenswrapper[7599]: E0313 01:15:35.474432 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api(2581e5b5-8cbb-4fa5-9888-98fb572a6232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api(2581e5b5-8cbb-4fa5-9888-98fb572a6232)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04\\\" Netns:\\\"/var/run/netns/8c106d23-a57b-4a7a-a5b4-9250188f5abd\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=b91c042123b1a1b63e451e152f1ae6902993005b39c69198ee947673fb843e04;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" podUID="2581e5b5-8cbb-4fa5-9888-98fb572a6232" Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: E0313 01:15:35.480570 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a" Netns:"/var/run/netns/f42f30d8-b124-469a-b999-3feda8a87a25" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: > Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: E0313 01:15:35.480641 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a): error adding pod 
openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a" Netns:"/var/run/netns/f42f30d8-b124-469a-b999-3feda8a87a25" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: > pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: E0313 01:15:35.480672 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: rpc 
error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a" Netns:"/var/run/netns/f42f30d8-b124-469a-b999-3feda8a87a25" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: > 
pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:15:35.480929 master-0 kubenswrapper[7599]: E0313 01:15:35.480752 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-4-master-0_openshift-kube-scheduler(7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-4-master-0_openshift-kube-scheduler(7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a\\\" Netns:\\\"/var/run/netns/f42f30d8-b124-469a-b999-3feda8a87a25\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=6b5a359a85526f8685577ded7e0e386224d453c40e7fc35f9afb82d554a70b6a;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-scheduler/installer-4-master-0" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: E0313 01:15:35.486750 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb" Netns:"/var/run/netns/9710046e-ad8d-46a7-bc44-1bcbc38ab0c3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-baremetal-operator-5cdb4c5598-5dvnt) Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: > Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: E0313 01:15:35.486804 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb" Netns:"/var/run/netns/9710046e-ad8d-46a7-bc44-1bcbc38ab0c3" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-baremetal-operator-5cdb4c5598-5dvnt) Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: E0313 01:15:35.486836 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb): error adding pod 
openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb" Netns:"/var/run/netns/9710046e-ad8d-46a7-bc44-1bcbc38ab0c3" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-baremetal-operator-5cdb4c5598-5dvnt) Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:15:35.487285 master-0 kubenswrapper[7599]: E0313 01:15:35.486968 7599 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api(21110b48-25fc-434a-b156-7f6bd6064bed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api(21110b48-25fc-434a-b156-7f6bd6064bed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb\\\" Netns:\\\"/var/run/netns/9710046e-ad8d-46a7-bc44-1bcbc38ab0c3\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=a8cf7768b9ad38b1223026bf4708344036bc753b692c1e377df30b0530ec91fb;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods cluster-baremetal-operator-5cdb4c5598-5dvnt)\\n': 
StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" podUID="21110b48-25fc-434a-b156-7f6bd6064bed" Mar 13 01:15:35.613960 master-0 kubenswrapper[7599]: E0313 01:15:35.613901 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:15:35.613960 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8" Netns:"/var/run/netns/4c249a7c-3ba5-48ac-a20c-06c4c59ff27c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out 
of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods insights-operator-8f89dfddd-hn4jh) Mar 13 01:15:35.613960 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.613960 master-0 kubenswrapper[7599]: > Mar 13 01:15:35.614244 master-0 kubenswrapper[7599]: E0313 01:15:35.613974 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:15:35.614244 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8" Netns:"/var/run/netns/4c249a7c-3ba5-48ac-a20c-06c4c59ff27c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: 
[openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods insights-operator-8f89dfddd-hn4jh) Mar 13 01:15:35.614244 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.614244 master-0 kubenswrapper[7599]: > pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:15:35.614244 master-0 kubenswrapper[7599]: E0313 01:15:35.613999 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:15:35.614244 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8" Netns:"/var/run/netns/4c249a7c-3ba5-48ac-a20c-06c4c59ff27c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods insights-operator-8f89dfddd-hn4jh) Mar 13 01:15:35.614244 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.614244 master-0 kubenswrapper[7599]: > pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:15:35.614244 master-0 kubenswrapper[7599]: E0313 01:15:35.614081 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"insights-operator-8f89dfddd-hn4jh_openshift-insights(6e799871-735a-44e8-8193-24c5bb388928)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"insights-operator-8f89dfddd-hn4jh_openshift-insights(6e799871-735a-44e8-8193-24c5bb388928)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8\\\" Netns:\\\"/var/run/netns/4c249a7c-3ba5-48ac-a20c-06c4c59ff27c\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=bcf4457aac97bb1fe35bb4c5b0bc3fcaed15f0f1eda2d3a50215f638223338f8;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods insights-operator-8f89dfddd-hn4jh)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" 
pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" podUID="6e799871-735a-44e8-8193-24c5bb388928" Mar 13 01:15:35.634084 master-0 kubenswrapper[7599]: E0313 01:15:35.634020 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:15:35.634084 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281" Netns:"/var/run/netns/db885e4b-b80c-4485-8a98-b4e1751e2e91" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s": net/http: request canceled (Client.Timeout 
exceeded while awaiting headers) Mar 13 01:15:35.634084 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.634084 master-0 kubenswrapper[7599]: > Mar 13 01:15:35.634084 master-0 kubenswrapper[7599]: E0313 01:15:35.634082 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:15:35.634084 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281" Netns:"/var/run/netns/db885e4b-b80c-4485-8a98-b4e1751e2e91" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to 
update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.634084 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.634084 master-0 kubenswrapper[7599]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:15:35.634698 master-0 kubenswrapper[7599]: E0313 01:15:35.634102 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:15:35.634698 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281" Netns:"/var/run/netns/db885e4b-b80c-4485-8a98-b4e1751e2e91" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:35.634698 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:35.634698 master-0 kubenswrapper[7599]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:15:35.634698 master-0 kubenswrapper[7599]: E0313 01:15:35.634166 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator(65dd1dc7-1b90-40f6-82c9-dee90a1fa852)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator(65dd1dc7-1b90-40f6-82c9-dee90a1fa852)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281\\\" Netns:\\\"/var/run/netns/db885e4b-b80c-4485-8a98-b4e1751e2e91\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=4400ef813d2a0356d261556145f9355e7d4de794ad21bd71d6eacaa9a7b97281;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" podUID="65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Mar 13 01:15:36.019357 master-0 kubenswrapper[7599]: I0313 01:15:36.019271 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:15:36.019357 master-0 kubenswrapper[7599]: I0313 01:15:36.019313 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:15:36.019761 master-0 kubenswrapper[7599]: I0313 01:15:36.019394 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:15:36.019761 master-0 kubenswrapper[7599]: I0313 01:15:36.019394 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:15:36.019761 master-0 kubenswrapper[7599]: I0313 01:15:36.019281 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:15:36.019761 master-0 kubenswrapper[7599]: I0313 01:15:36.019611 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:15:36.020070 master-0 kubenswrapper[7599]: I0313 01:15:36.020024 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:15:36.020413 master-0 kubenswrapper[7599]: I0313 01:15:36.020353 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:15:36.020413 master-0 kubenswrapper[7599]: I0313 01:15:36.020391 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:15:36.020672 master-0 kubenswrapper[7599]: I0313 01:15:36.020585 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:15:36.020940 master-0 kubenswrapper[7599]: I0313 01:15:36.020876 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:15:36.021068 master-0 kubenswrapper[7599]: I0313 01:15:36.021033 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:15:36.273747 master-0 kubenswrapper[7599]: E0313 01:15:36.273681 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:15:36.273747 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53" Netns:"/var/run/netns/1f586235-78ea-4a4e-ac8b-e77cbbab3fd1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 
01:15:36.273747 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:36.273747 master-0 kubenswrapper[7599]: > Mar 13 01:15:36.273963 master-0 kubenswrapper[7599]: E0313 01:15:36.273758 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:15:36.273963 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53" Netns:"/var/run/netns/1f586235-78ea-4a4e-ac8b-e77cbbab3fd1" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:36.273963 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:36.273963 master-0 kubenswrapper[7599]: > pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:15:36.273963 master-0 kubenswrapper[7599]: E0313 01:15:36.273792 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:15:36.273963 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53" Netns:"/var/run/netns/1f586235-78ea-4a4e-ac8b-e77cbbab3fd1" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:36.273963 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:36.273963 master-0 kubenswrapper[7599]: > pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:15:36.273963 master-0 kubenswrapper[7599]: E0313 01:15:36.273892 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator(778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator(778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53\\\" Netns:\\\"/var/run/netns/1f586235-78ea-4a4e-ac8b-e77cbbab3fd1\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6625774e5798a33f73e764192ab1b90a80f58ddad918571b0224b2ea4e986f53;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" podUID="778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Mar 13 01:15:36.428427 master-0 kubenswrapper[7599]: E0313 01:15:36.428360 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:15:36.428427 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713" Netns:"/var/run/netns/d7fc7db4-4a65-49d7-901b-00fa1dc303ce" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of 
cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:36.428427 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:36.428427 master-0 kubenswrapper[7599]: > Mar 13 01:15:36.429102 master-0 kubenswrapper[7599]: E0313 01:15:36.428458 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:15:36.429102 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713" Netns:"/var/run/netns/d7fc7db4-4a65-49d7-901b-00fa1dc303ce" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] 
networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:36.429102 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:36.429102 master-0 kubenswrapper[7599]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:15:36.429102 master-0 kubenswrapper[7599]: E0313 01:15:36.428495 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:15:36.429102 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713" Netns:"/var/run/netns/d7fc7db4-4a65-49d7-901b-00fa1dc303ce" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:36.429102 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:36.429102 master-0 kubenswrapper[7599]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:15:36.429102 master-0 kubenswrapper[7599]: E0313 01:15:36.428632 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(7106c6fe-7c8d-45b9-bc5c-521db743663f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(7106c6fe-7c8d-45b9-bc5c-521db743663f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713\\\" Netns:\\\"/var/run/netns/d7fc7db4-4a65-49d7-901b-00fa1dc303ce\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=530072c994651a6f31a347faa9a05a65c7627099c4bc8102b2927c9a6931b713;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" Mar 13 01:15:36.432798 master-0 kubenswrapper[7599]: E0313 01:15:36.432758 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:15:36.432798 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca" Netns:"/var/run/netns/a1714b6e-b3ee-457c-8ec1-6e50b376da90" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:36.432798 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:36.432798 master-0 kubenswrapper[7599]: > Mar 13 01:15:36.432942 master-0 kubenswrapper[7599]: E0313 01:15:36.432810 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:15:36.432942 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca" Netns:"/var/run/netns/a1714b6e-b3ee-457c-8ec1-6e50b376da90" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:36.432942 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:36.432942 master-0 kubenswrapper[7599]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:15:36.432942 master-0 kubenswrapper[7599]: E0313 01:15:36.432832 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:15:36.432942 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca" Netns:"/var/run/netns/a1714b6e-b3ee-457c-8ec1-6e50b376da90" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:15:36.432942 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:15:36.432942 master-0 kubenswrapper[7599]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:15:36.432942 master-0 kubenswrapper[7599]: E0313 01:15:36.432884 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator(65ef9aae-25a5-46c6-adf3-634f8f7a29bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator(65ef9aae-25a5-46c6-adf3-634f8f7a29bc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca\\\" Netns:\\\"/var/run/netns/a1714b6e-b3ee-457c-8ec1-6e50b376da90\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=a1e170277c34391fa3e08e5a57b18f5110bfc8ab756f2025dd303f8415498cca;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" podUID="65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Mar 13 01:15:37.026373 master-0 kubenswrapper[7599]: I0313 01:15:37.026293 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:15:37.026713 master-0 kubenswrapper[7599]: I0313 01:15:37.026425 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:15:37.026713 master-0 kubenswrapper[7599]: I0313 01:15:37.026441 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:15:37.027028 master-0 kubenswrapper[7599]: I0313 01:15:37.026992 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:15:37.027661 master-0 kubenswrapper[7599]: I0313 01:15:37.027609 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:15:37.029798 master-0 kubenswrapper[7599]: I0313 01:15:37.029752 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:15:37.267679 master-0 kubenswrapper[7599]: I0313 01:15:37.267556 7599 patch_prober.go:28] interesting pod/etcd-operator-5884b9cd56-8r87t container/etcd-operator namespace/openshift-etcd-operator: Liveness probe status=failure output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" start-of-body= Mar 13 01:15:37.268112 master-0 kubenswrapper[7599]: I0313 01:15:37.267971 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" podUID="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" containerName="etcd-operator" probeResult="failure" output="Get \"https://10.128.0.7:8443/healthz\": dial tcp 10.128.0.7:8443: connect: connection refused" Mar 13 01:15:37.838244 master-0 kubenswrapper[7599]: I0313 01:15:37.838134 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body= Mar 13 01:15:37.839149 master-0 kubenswrapper[7599]: I0313 01:15:37.838264 7599 prober.go:107] 
"Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" Mar 13 01:15:37.839149 master-0 kubenswrapper[7599]: I0313 01:15:37.838375 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body= Mar 13 01:15:37.839149 master-0 kubenswrapper[7599]: I0313 01:15:37.838436 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" Mar 13 01:15:39.048293 master-0 kubenswrapper[7599]: I0313 01:15:39.048184 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-z4qvz_81835d51-a414-440f-889b-690561e98d6a/manager/0.log" Mar 13 01:15:39.049413 master-0 kubenswrapper[7599]: I0313 01:15:39.048942 7599 generic.go:334] "Generic (PLEG): container finished" podID="81835d51-a414-440f-889b-690561e98d6a" containerID="e9eb86bc8639ac87892dc75bde4aa22bd6e683c301d4d69ac50acf0d02a2db39" exitCode=1 Mar 13 01:15:39.051564 master-0 kubenswrapper[7599]: I0313 01:15:39.051458 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/0.log" Mar 13 01:15:39.051712 master-0 kubenswrapper[7599]: I0313 01:15:39.051584 7599 generic.go:334] "Generic 
(PLEG): container finished" podID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" containerID="743c555e1cf0c98c73695ed678affcb2226d9582a12dd77e2de535512f78c66d" exitCode=1 Mar 13 01:15:39.055152 master-0 kubenswrapper[7599]: I0313 01:15:39.055082 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-n4252_07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/manager/0.log" Mar 13 01:15:39.055152 master-0 kubenswrapper[7599]: I0313 01:15:39.055135 7599 generic.go:334] "Generic (PLEG): container finished" podID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerID="fd379745af9da3dead649206438373348f4ca6dba57dff1deac4d0df35fc6fc1" exitCode=1 Mar 13 01:15:41.043360 master-0 kubenswrapper[7599]: E0313 01:15:41.043267 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:15:41.044309 master-0 kubenswrapper[7599]: E0313 01:15:41.043573 7599 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.016s" Mar 13 01:15:41.052706 master-0 kubenswrapper[7599]: I0313 01:15:41.052601 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 01:15:44.102993 master-0 kubenswrapper[7599]: E0313 01:15:44.102608 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-operators-t88cc.189c4189b4c3be36 openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-operators-t88cc,UID:c6382e2a-ec14-4457-8f26-3087b19d1e1a,APIVersion:v1,ResourceVersion:7028,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled 
image \"registry.redhat.io/redhat/redhat-operator-index:v4.18\" in 30.815s (30.815s including waiting). Image size: 1739173859 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:13:32.976184886 +0000 UTC m=+72.247864290,LastTimestamp:2026-03-13 01:13:32.976184886 +0000 UTC m=+72.247864290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:15:45.682214 master-0 kubenswrapper[7599]: E0313 01:15:45.681921 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:15:35Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:15:35Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:15:35Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:15:35Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3f3a3fc0144fd075212160b467722ab529c42c226d7e87d397f821c8e7df8628\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:ec7e570b
e8cf0476a38d4db98b0455d5b94538b5b7b2ddb3b7d8f12c724c6ddb\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284752601},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:50fe533376cf6d45ae7e343c58d9c480fb1bc96859ffbbdc51ce2c428de2b653\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd2f5e111c85cdeeff92a61f881c260de30a26d2d9938eef43024e637422abaa\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221745878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeB
ytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354
d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c54c3f7cffe057ae0bdf26163d5e46744685083ae16fc97112e32beacd2d8955\\\"],\\\"sizeBytes\\\":484175664},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9b8bc43bac294be3c7669cde049e388ad9d8751242051ba40f83e1c401eceda\\\"],\\\"sizeBytes\\\":468263999},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ebee49810f493f9b566740bd61256fd40b897cc51423f1efa01a02bb57ce177d\\\"],\\\"sizeBytes\\\":467234714},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8677f7a973553c25d282bc249fc8bc0f5aa42fb144ea0956d1f04c5a6cd80501\\\"],\\\"sizeBytes\\\":465086330},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a85dab5856916220df6f05ce9d6aa10cd4fa0234093b55355246690bba05ad1\\\"],\\\"sizeBytes\\\":463700811},{\\\"n
ames\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b714a7ada1e295b599b432f32e1fd5b74c8cdbe6fe51e95306322b25cb873914\\\"],\\\"sizeBytes\\\":458126424},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5230462066ab36e3025524e948dd33fa6f51ee29a4f91fa469bfc268568b5fd9\\\"],\\\"sizeBytes\\\":456575686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:89cb093f319eaa04acfe9431b8697bffbc71ab670546f7ed257daa332165c626\\\"],\\\"sizeBytes\\\":448828105},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c680fcc9fd6b66099ca4c0f512521b6f8e0bc29273ddb9405730bc54bacb6783\\\"],\\\"sizeBytes\\\":448041621}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:15:46.761744 master-0 kubenswrapper[7599]: I0313 01:15:46.761642 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-n4252 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Mar 13 01:15:46.761744 master-0 kubenswrapper[7599]: I0313 01:15:46.761722 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" podUID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" Mar 13 01:15:46.939010 master-0 kubenswrapper[7599]: I0313 01:15:46.938898 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-z4qvz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: 
connect: connection refused" start-of-body= Mar 13 01:15:46.939292 master-0 kubenswrapper[7599]: I0313 01:15:46.939017 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" podUID="81835d51-a414-440f-889b-690561e98d6a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 13 01:15:47.688683 master-0 kubenswrapper[7599]: E0313 01:15:47.688550 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 01:15:47.838317 master-0 kubenswrapper[7599]: I0313 01:15:47.838207 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body= Mar 13 01:15:47.839661 master-0 kubenswrapper[7599]: I0313 01:15:47.838325 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" Mar 13 01:15:47.839661 master-0 kubenswrapper[7599]: I0313 01:15:47.838461 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body= Mar 13 01:15:47.839661 master-0 
kubenswrapper[7599]: I0313 01:15:47.838853 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" Mar 13 01:15:54.344629 master-0 kubenswrapper[7599]: I0313 01:15:54.344457 7599 generic.go:334] "Generic (PLEG): container finished" podID="8c377a67-e763-4925-afae-a7f8546a369b" containerID="7e4809732e6f42f6e1aaeab2220c5d3d3098fc28ea26ac8cc73446ea1b10cd93" exitCode=0 Mar 13 01:15:55.683541 master-0 kubenswrapper[7599]: E0313 01:15:55.683360 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:15:56.763189 master-0 kubenswrapper[7599]: I0313 01:15:56.763055 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-n4252 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Mar 13 01:15:56.763189 master-0 kubenswrapper[7599]: I0313 01:15:56.763160 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-n4252 container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.45:8081/healthz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Mar 13 01:15:56.763189 master-0 kubenswrapper[7599]: I0313 01:15:56.763194 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" 
podUID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" Mar 13 01:15:56.764550 master-0 kubenswrapper[7599]: I0313 01:15:56.763271 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" podUID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/healthz\": dial tcp 10.128.0.45:8081: connect: connection refused" Mar 13 01:15:56.938732 master-0 kubenswrapper[7599]: I0313 01:15:56.938626 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-z4qvz container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 13 01:15:56.938732 master-0 kubenswrapper[7599]: I0313 01:15:56.938625 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-z4qvz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 13 01:15:56.939380 master-0 kubenswrapper[7599]: I0313 01:15:56.938736 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" podUID="81835d51-a414-440f-889b-690561e98d6a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 13 01:15:56.939380 master-0 kubenswrapper[7599]: I0313 01:15:56.938793 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" podUID="81835d51-a414-440f-889b-690561e98d6a" 
containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 13 01:15:57.838174 master-0 kubenswrapper[7599]: I0313 01:15:57.838115 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body= Mar 13 01:15:57.838174 master-0 kubenswrapper[7599]: I0313 01:15:57.838152 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body= Mar 13 01:15:57.838174 master-0 kubenswrapper[7599]: I0313 01:15:57.838191 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" Mar 13 01:15:57.838962 master-0 kubenswrapper[7599]: I0313 01:15:57.838201 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" Mar 13 01:16:04.416850 master-0 kubenswrapper[7599]: I0313 01:16:04.416734 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-plhx7_b5757329-8692-4719-b3c7-b5df78110fcf/authentication-operator/2.log" Mar 13 
01:16:04.418227 master-0 kubenswrapper[7599]: I0313 01:16:04.417865 7599 generic.go:334] "Generic (PLEG): container finished" podID="b5757329-8692-4719-b3c7-b5df78110fcf" containerID="25381ad36be0f85f98a8e3ecc8a5f4186dffd21de460ff1a56fc27b43bbb1f04" exitCode=255 Mar 13 01:16:04.690792 master-0 kubenswrapper[7599]: E0313 01:16:04.690470 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 01:16:05.684125 master-0 kubenswrapper[7599]: E0313 01:16:05.683997 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:16:06.761710 master-0 kubenswrapper[7599]: I0313 01:16:06.761617 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-n4252 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Mar 13 01:16:06.761710 master-0 kubenswrapper[7599]: I0313 01:16:06.761699 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" podUID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" Mar 13 01:16:06.938449 master-0 kubenswrapper[7599]: I0313 01:16:06.938332 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-z4qvz container/manager namespace/openshift-catalogd: Readiness 
probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 13 01:16:06.938842 master-0 kubenswrapper[7599]: I0313 01:16:06.938456 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" podUID="81835d51-a414-440f-889b-690561e98d6a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 13 01:16:07.838662 master-0 kubenswrapper[7599]: I0313 01:16:07.838579 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body= Mar 13 01:16:07.838662 master-0 kubenswrapper[7599]: I0313 01:16:07.838656 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" Mar 13 01:16:15.056170 master-0 kubenswrapper[7599]: E0313 01:16:15.056073 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0-master-0" Mar 13 01:16:15.057097 master-0 kubenswrapper[7599]: E0313 01:16:15.056372 7599 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="34.013s" Mar 13 01:16:15.057097 master-0 kubenswrapper[7599]: I0313 01:16:15.056409 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" 
event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"dc0cc2d6bf9be0a194a0217c205d2ab79cbfb7d5acd7c9e8902600ce17ed4649"} Mar 13 01:16:15.067299 master-0 kubenswrapper[7599]: I0313 01:16:15.067238 7599 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 01:16:15.685247 master-0 kubenswrapper[7599]: E0313 01:16:15.685138 7599 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:16:16.762794 master-0 kubenswrapper[7599]: I0313 01:16:16.762660 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-n4252 container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.45:8081/healthz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Mar 13 01:16:16.763697 master-0 kubenswrapper[7599]: I0313 01:16:16.762774 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-n4252 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Mar 13 01:16:16.763697 master-0 kubenswrapper[7599]: I0313 01:16:16.762905 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" podUID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" Mar 13 01:16:16.763697 master-0 kubenswrapper[7599]: I0313 01:16:16.762794 7599 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" podUID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/healthz\": dial tcp 10.128.0.45:8081: connect: connection refused" Mar 13 01:16:16.939027 master-0 kubenswrapper[7599]: I0313 01:16:16.938911 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-z4qvz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 13 01:16:16.939027 master-0 kubenswrapper[7599]: I0313 01:16:16.938986 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" podUID="81835d51-a414-440f-889b-690561e98d6a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 13 01:16:16.939449 master-0 kubenswrapper[7599]: I0313 01:16:16.939167 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-z4qvz container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 13 01:16:16.939449 master-0 kubenswrapper[7599]: I0313 01:16:16.939291 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" podUID="81835d51-a414-440f-889b-690561e98d6a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 13 01:16:17.837649 master-0 kubenswrapper[7599]: I0313 01:16:17.837577 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator 
namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body= Mar 13 01:16:17.838186 master-0 kubenswrapper[7599]: I0313 01:16:17.837668 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" Mar 13 01:16:18.107306 master-0 kubenswrapper[7599]: E0313 01:16:18.106975 7599 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{redhat-marketplace-7mqtr.189c4189b75219bb openshift-marketplace 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-marketplace,Name:redhat-marketplace-7mqtr,UID:9992615a-c49b-4ef0-b02b-c6cd2e719fa3,APIVersion:v1,ResourceVersion:6927,FieldPath:spec.initContainers{extract-content},},Reason:Pulled,Message:Successfully pulled image \"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\" in 30.882s (30.882s including waiting). 
Image size: 1231028434 bytes.,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:13:33.019068859 +0000 UTC m=+72.290748253,LastTimestamp:2026-03-13 01:13:33.019068859 +0000 UTC m=+72.290748253,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:16:21.691808 master-0 kubenswrapper[7599]: E0313 01:16:21.691702 7599 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 01:16:26.762306 master-0 kubenswrapper[7599]: I0313 01:16:26.762217 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-n4252 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Mar 13 01:16:26.762306 master-0 kubenswrapper[7599]: I0313 01:16:26.762292 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" podUID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" Mar 13 01:16:26.938735 master-0 kubenswrapper[7599]: I0313 01:16:26.938655 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-z4qvz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 13 01:16:26.938974 master-0 kubenswrapper[7599]: I0313 01:16:26.938760 7599 prober.go:107] 
"Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" podUID="81835d51-a414-440f-889b-690561e98d6a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 13 01:16:27.838671 master-0 kubenswrapper[7599]: I0313 01:16:27.838500 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body= Mar 13 01:16:27.839571 master-0 kubenswrapper[7599]: I0313 01:16:27.838696 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" Mar 13 01:16:28.065687 master-0 kubenswrapper[7599]: E0313 01:16:28.065603 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Internal error occurred: admission plugin \"LimitRanger\" failed to complete mutation in 13s" pod="openshift-etcd/etcd-master-0" Mar 13 01:16:31.850482 master-0 kubenswrapper[7599]: I0313 01:16:31.850395 7599 status_manager.go:851] "Failed to get status for pod" podUID="a1a56802af72ce1aac6b5077f1695ac0" pod="kube-system/bootstrap-kube-scheduler-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods bootstrap-kube-scheduler-master-0)" Mar 13 01:16:35.923485 master-0 kubenswrapper[7599]: E0313 01:16:35.923403 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:16:35.923485 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b" Netns:"/var/run/netns/13228641-56ab-4be9-abc8-f485c5064b96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:35.923485 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:35.923485 master-0 kubenswrapper[7599]: > Mar 13 01:16:35.924360 master-0 kubenswrapper[7599]: E0313 01:16:35.923557 7599 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:16:35.924360 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b" Netns:"/var/run/netns/13228641-56ab-4be9-abc8-f485c5064b96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:35.924360 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:35.924360 master-0 kubenswrapper[7599]: > pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:16:35.924360 master-0 kubenswrapper[7599]: E0313 01:16:35.923601 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:16:35.924360 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b" Netns:"/var/run/netns/13228641-56ab-4be9-abc8-f485c5064b96" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8" Path:"" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:35.924360 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:35.924360 master-0 kubenswrapper[7599]: > pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:16:35.924360 master-0 kubenswrapper[7599]: E0313 01:16:35.923732 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-1-master-0_openshift-kube-apiserver(fdcd8438-d33f-490f-a841-8944c58506f8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-1-master-0_openshift-kube-apiserver(fdcd8438-d33f-490f-a841-8944c58506f8)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-1-master-0_openshift-kube-apiserver_fdcd8438-d33f-490f-a841-8944c58506f8_0(5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b): error adding pod openshift-kube-apiserver_installer-1-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b\\\" Netns:\\\"/var/run/netns/13228641-56ab-4be9-abc8-f485c5064b96\\\" IfName:\\\"eth0\\\" 
Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-apiserver;K8S_POD_NAME=installer-1-master-0;K8S_POD_INFRA_CONTAINER_ID=5e2fb2d7f60ce388b1bffd3a8089b8eddad106bcea2ad85e8a5319669326376b;K8S_POD_UID=fdcd8438-d33f-490f-a841-8944c58506f8\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-apiserver/installer-1-master-0] networking: Multus: [openshift-kube-apiserver/installer-1-master-0/fdcd8438-d33f-490f-a841-8944c58506f8]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-1-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-1-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-apiserver/installer-1-master-0" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" Mar 13 01:16:36.776016 master-0 kubenswrapper[7599]: I0313 01:16:36.775927 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-n4252 container/manager namespace/openshift-operator-controller: Readiness probe status=failure output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Mar 13 01:16:36.776016 master-0 kubenswrapper[7599]: I0313 01:16:36.775997 7599 prober.go:107] "Probe failed" 
probeType="Readiness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" podUID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/readyz\": dial tcp 10.128.0.45:8081: connect: connection refused" Mar 13 01:16:36.776313 master-0 kubenswrapper[7599]: I0313 01:16:36.776029 7599 patch_prober.go:28] interesting pod/operator-controller-controller-manager-6598bfb6c4-n4252 container/manager namespace/openshift-operator-controller: Liveness probe status=failure output="Get \"http://10.128.0.45:8081/healthz\": dial tcp 10.128.0.45:8081: connect: connection refused" start-of-body= Mar 13 01:16:36.776313 master-0 kubenswrapper[7599]: I0313 01:16:36.776103 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" podUID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.45:8081/healthz\": dial tcp 10.128.0.45:8081: connect: connection refused" Mar 13 01:16:36.855939 master-0 kubenswrapper[7599]: E0313 01:16:36.855876 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:16:36.855939 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca" Netns:"/var/run/netns/16afe681-d77e-4ad5-b7fd-3fe00136717d" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:36.855939 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:36.855939 master-0 kubenswrapper[7599]: > Mar 13 01:16:36.856193 master-0 kubenswrapper[7599]: E0313 01:16:36.855971 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:16:36.856193 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:"f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca" Netns:"/var/run/netns/16afe681-d77e-4ad5-b7fd-3fe00136717d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:36.856193 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:36.856193 master-0 kubenswrapper[7599]: > pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:16:36.856193 master-0 kubenswrapper[7599]: E0313 01:16:36.856000 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:16:36.856193 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca" Netns:"/var/run/netns/16afe681-d77e-4ad5-b7fd-3fe00136717d" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Path:"" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:36.856193 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:36.856193 master-0 kubenswrapper[7599]: > pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:16:36.856193 master-0 
kubenswrapper[7599]: E0313 01:16:36.856087 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-4-master-0_openshift-kube-scheduler(7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-4-master-0_openshift-kube-scheduler(7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-4-master-0_openshift-kube-scheduler_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90_0(f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca): error adding pod openshift-kube-scheduler_installer-4-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca\\\" Netns:\\\"/var/run/netns/16afe681-d77e-4ad5-b7fd-3fe00136717d\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-scheduler;K8S_POD_NAME=installer-4-master-0;K8S_POD_INFRA_CONTAINER_ID=f335802a7c5ebe9520d36733cb62cd736e51c0ee30929d37b28cd369897daaca;K8S_POD_UID=7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-scheduler/installer-4-master-0] networking: Multus: [openshift-kube-scheduler/installer-4-master-0/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-4-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-4-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-scheduler/installer-4-master-0" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" Mar 13 01:16:36.927958 master-0 kubenswrapper[7599]: E0313 01:16:36.927891 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:16:36.927958 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1" Netns:"/var/run/netns/22d39d2f-0918-45d3-b5eb-56eaafc37dc8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks 
status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:36.927958 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:36.927958 master-0 kubenswrapper[7599]: > Mar 13 01:16:36.928486 master-0 kubenswrapper[7599]: E0313 01:16:36.927967 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:16:36.928486 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1" Netns:"/var/run/netns/22d39d2f-0918-45d3-b5eb-56eaafc37dc8" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:36.928486 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:36.928486 master-0 kubenswrapper[7599]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:16:36.928486 master-0 kubenswrapper[7599]: E0313 01:16:36.927995 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:16:36.928486 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1" Netns:"/var/run/netns/22d39d2f-0918-45d3-b5eb-56eaafc37dc8" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Path:"" ERRORED: error configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:36.928486 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:36.928486 master-0 kubenswrapper[7599]: > pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:16:36.928486 master-0 kubenswrapper[7599]: E0313 01:16:36.928094 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator(65dd1dc7-1b90-40f6-82c9-dee90a1fa852)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator(65dd1dc7-1b90-40f6-82c9-dee90a1fa852)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cloud-credential-operator-55d85b7b47-b4w7s_openshift-cloud-credential-operator_65dd1dc7-1b90-40f6-82c9-dee90a1fa852_0(1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1): error adding pod openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1\\\" Netns:\\\"/var/run/netns/22d39d2f-0918-45d3-b5eb-56eaafc37dc8\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cloud-credential-operator;K8S_POD_NAME=cloud-credential-operator-55d85b7b47-b4w7s;K8S_POD_INFRA_CONTAINER_ID=1fa54638055edc897bec5aa2863dc3fe8c7d9a6f6d147976c311088b7ab280d1;K8S_POD_UID=65dd1dc7-1b90-40f6-82c9-dee90a1fa852\\\" Path:\\\"\\\" ERRORED: error 
configuring pod [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s] networking: Multus: [openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s/65dd1dc7-1b90-40f6-82c9-dee90a1fa852]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: SetNetworkStatus: failed to update the pod cloud-credential-operator-55d85b7b47-b4w7s in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-credential-operator/pods/cloud-credential-operator-55d85b7b47-b4w7s?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" podUID="65dd1dc7-1b90-40f6-82c9-dee90a1fa852" Mar 13 01:16:36.938960 master-0 kubenswrapper[7599]: I0313 01:16:36.938912 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-z4qvz container/manager namespace/openshift-catalogd: Liveness probe status=failure output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 13 01:16:36.939281 master-0 kubenswrapper[7599]: I0313 01:16:36.939246 7599 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" 
podUID="81835d51-a414-440f-889b-690561e98d6a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/healthz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 13 01:16:36.939403 master-0 kubenswrapper[7599]: I0313 01:16:36.938912 7599 patch_prober.go:28] interesting pod/catalogd-controller-manager-7f8b8b6f4c-z4qvz container/manager namespace/openshift-catalogd: Readiness probe status=failure output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" start-of-body= Mar 13 01:16:36.939498 master-0 kubenswrapper[7599]: I0313 01:16:36.939480 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" podUID="81835d51-a414-440f-889b-690561e98d6a" containerName="manager" probeResult="failure" output="Get \"http://10.128.0.44:8081/readyz\": dial tcp 10.128.0.44:8081: connect: connection refused" Mar 13 01:16:37.174964 master-0 kubenswrapper[7599]: E0313 01:16:37.174884 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:16:37.174964 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54" Netns:"/var/run/netns/68f5c10f-9b9b-44ba-a4b7-c036b4537d28" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-hn4jh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.174964 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.174964 master-0 kubenswrapper[7599]: > Mar 13 01:16:37.175260 master-0 kubenswrapper[7599]: E0313 01:16:37.174994 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:16:37.175260 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network "multus-cni-network": plugin 
type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54" Netns:"/var/run/netns/68f5c10f-9b9b-44ba-a4b7-c036b4537d28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-hn4jh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.175260 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.175260 master-0 kubenswrapper[7599]: > pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:16:37.175260 master-0 kubenswrapper[7599]: E0313 01:16:37.175031 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:16:37.175260 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = 
failed to create pod network sandbox k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54" Netns:"/var/run/netns/68f5c10f-9b9b-44ba-a4b7-c036b4537d28" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928" Path:"" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-hn4jh?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.175260 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.175260 master-0 
kubenswrapper[7599]: > pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:16:37.175260 master-0 kubenswrapper[7599]: E0313 01:16:37.175126 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"insights-operator-8f89dfddd-hn4jh_openshift-insights(6e799871-735a-44e8-8193-24c5bb388928)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"insights-operator-8f89dfddd-hn4jh_openshift-insights(6e799871-735a-44e8-8193-24c5bb388928)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_insights-operator-8f89dfddd-hn4jh_openshift-insights_6e799871-735a-44e8-8193-24c5bb388928_0(73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54): error adding pod openshift-insights_insights-operator-8f89dfddd-hn4jh to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54\\\" Netns:\\\"/var/run/netns/68f5c10f-9b9b-44ba-a4b7-c036b4537d28\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-insights;K8S_POD_NAME=insights-operator-8f89dfddd-hn4jh;K8S_POD_INFRA_CONTAINER_ID=73180ba099af092021cabaef935d04bfd927c1b656fb0bd1dc3d4dc77384af54;K8S_POD_UID=6e799871-735a-44e8-8193-24c5bb388928\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-insights/insights-operator-8f89dfddd-hn4jh] networking: Multus: [openshift-insights/insights-operator-8f89dfddd-hn4jh/6e799871-735a-44e8-8193-24c5bb388928]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: SetNetworkStatus: failed to update the pod insights-operator-8f89dfddd-hn4jh in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-insights/pods/insights-operator-8f89dfddd-hn4jh?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" podUID="6e799871-735a-44e8-8193-24c5bb388928" Mar 13 01:16:37.182150 master-0 kubenswrapper[7599]: E0313 01:16:37.182093 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:16:37.182150 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa" Netns:"/var/run/netns/b388548a-d085-4b97-86ac-47d0b7b6a814" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed" Path:"" ERRORED: error configuring pod 
[openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-5dvnt?timeout=1m0s": context deadline exceeded Mar 13 01:16:37.182150 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.182150 master-0 kubenswrapper[7599]: > Mar 13 01:16:37.182343 master-0 kubenswrapper[7599]: E0313 01:16:37.182166 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:16:37.182343 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa" Netns:"/var/run/netns/b388548a-d085-4b97-86ac-47d0b7b6a814" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-5dvnt?timeout=1m0s": context deadline exceeded Mar 13 01:16:37.182343 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.182343 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:16:37.182343 master-0 kubenswrapper[7599]: E0313 01:16:37.182202 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:16:37.182343 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa): error adding pod 
openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa" Netns:"/var/run/netns/b388548a-d085-4b97-86ac-47d0b7b6a814" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-5dvnt?timeout=1m0s": context deadline exceeded Mar 13 01:16:37.182343 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.182343 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:16:37.182343 master-0 kubenswrapper[7599]: E0313 01:16:37.182290 7599 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api(21110b48-25fc-434a-b156-7f6bd6064bed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api(21110b48-25fc-434a-b156-7f6bd6064bed)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api_21110b48-25fc-434a-b156-7f6bd6064bed_0(4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa): error adding pod openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa\\\" Netns:\\\"/var/run/netns/b388548a-d085-4b97-86ac-47d0b7b6a814\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-baremetal-operator-5cdb4c5598-5dvnt;K8S_POD_INFRA_CONTAINER_ID=4f1bfec600bfd56e327312267c3b0e9acb8893e97eaf928b71a90920425e32fa;K8S_POD_UID=21110b48-25fc-434a-b156-7f6bd6064bed\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt] networking: Multus: [openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt/21110b48-25fc-434a-b156-7f6bd6064bed]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-baremetal-operator-5cdb4c5598-5dvnt in out of cluster comm: status update failed for pod /: Get 
\\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-baremetal-operator-5cdb4c5598-5dvnt?timeout=1m0s\\\": context deadline exceeded\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" podUID="21110b48-25fc-434a-b156-7f6bd6064bed" Mar 13 01:16:37.187767 master-0 kubenswrapper[7599]: E0313 01:16:37.187706 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:16:37.187767 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6" Netns:"/var/run/netns/1c3977b1-3133-4801-a4b1-69cad39a8924" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] 
networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.187767 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.187767 master-0 kubenswrapper[7599]: > Mar 13 01:16:37.187970 master-0 kubenswrapper[7599]: E0313 01:16:37.187783 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:16:37.187970 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6" Netns:"/var/run/netns/1c3977b1-3133-4801-a4b1-69cad39a8924" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.187970 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.187970 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:16:37.187970 master-0 kubenswrapper[7599]: E0313 01:16:37.187809 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:16:37.187970 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6" Netns:"/var/run/netns/1c3977b1-3133-4801-a4b1-69cad39a8924" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232" Path:"" ERRORED: error configuring pod [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.187970 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.187970 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:16:37.187970 master-0 kubenswrapper[7599]: E0313 01:16:37.187875 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api(2581e5b5-8cbb-4fa5-9888-98fb572a6232)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api(2581e5b5-8cbb-4fa5-9888-98fb572a6232)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-autoscaler-operator-69576476f7-lrmx9_openshift-machine-api_2581e5b5-8cbb-4fa5-9888-98fb572a6232_0(522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6): error adding pod openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6\\\" Netns:\\\"/var/run/netns/1c3977b1-3133-4801-a4b1-69cad39a8924\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=cluster-autoscaler-operator-69576476f7-lrmx9;K8S_POD_INFRA_CONTAINER_ID=522254cf190d2b4881e5c693560dc5c119fc80130304537f20c4b8b5d49501b6;K8S_POD_UID=2581e5b5-8cbb-4fa5-9888-98fb572a6232\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9] networking: Multus: [openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9/2581e5b5-8cbb-4fa5-9888-98fb572a6232]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-autoscaler-operator-69576476f7-lrmx9 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/cluster-autoscaler-operator-69576476f7-lrmx9?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" podUID="2581e5b5-8cbb-4fa5-9888-98fb572a6232" Mar 13 01:16:37.240842 master-0 kubenswrapper[7599]: E0313 01:16:37.240747 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:16:37.240842 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd 
(shim): CNI request failed with status 400: 'ContainerID:"7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5" Netns:"/var/run/netns/39923cc7-1eef-40cd-82f5-30b02cbdb494" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.240842 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.240842 master-0 kubenswrapper[7599]: > Mar 13 01:16:37.241181 master-0 kubenswrapper[7599]: E0313 01:16:37.240857 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:16:37.241181 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to 
create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5" Netns:"/var/run/netns/39923cc7-1eef-40cd-82f5-30b02cbdb494" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.241181 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.241181 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:16:37.241181 master-0 kubenswrapper[7599]: E0313 01:16:37.240894 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:16:37.241181 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5" Netns:"/var/run/netns/39923cc7-1eef-40cd-82f5-30b02cbdb494" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f" Path:"" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.241181 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.241181 master-0 kubenswrapper[7599]: > pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:16:37.241181 master-0 kubenswrapper[7599]: E0313 01:16:37.241010 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api(56e20b21-ba17-46ae-a740-0e7bd45eae5f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api(56e20b21-ba17-46ae-a740-0e7bd45eae5f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_control-plane-machine-set-operator-6686554ddc-pmrq6_openshift-machine-api_56e20b21-ba17-46ae-a740-0e7bd45eae5f_0(7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5): error adding pod openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed 
with status 400: 'ContainerID:\\\"7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5\\\" Netns:\\\"/var/run/netns/39923cc7-1eef-40cd-82f5-30b02cbdb494\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-machine-api;K8S_POD_NAME=control-plane-machine-set-operator-6686554ddc-pmrq6;K8S_POD_INFRA_CONTAINER_ID=7fa376f9c4c74e15a38ccfd95184058d46d8dfe13624ab8a8f14fab6482daef5;K8S_POD_UID=56e20b21-ba17-46ae-a740-0e7bd45eae5f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6] networking: Multus: [openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6/56e20b21-ba17-46ae-a740-0e7bd45eae5f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: SetNetworkStatus: failed to update the pod control-plane-machine-set-operator-6686554ddc-pmrq6 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/pods/control-plane-machine-set-operator-6686554ddc-pmrq6?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" podUID="56e20b21-ba17-46ae-a740-0e7bd45eae5f" Mar 13 01:16:37.757890 master-0 kubenswrapper[7599]: E0313 01:16:37.757810 7599 log.go:32] 
"RunPodSandbox from runtime service failed" err=< Mar 13 01:16:37.757890 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d" Netns:"/var/run/netns/f4a83246-4d7a-434e-a9d8-e9b2d2ab350f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.757890 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.757890 master-0 kubenswrapper[7599]: > Mar 13 01:16:37.758117 master-0 kubenswrapper[7599]: E0313 01:16:37.757933 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:16:37.758117 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d" Netns:"/var/run/netns/f4a83246-4d7a-434e-a9d8-e9b2d2ab350f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.758117 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.758117 master-0 kubenswrapper[7599]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:16:37.758117 master-0 kubenswrapper[7599]: E0313 01:16:37.757972 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:16:37.758117 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d" Netns:"/var/run/netns/f4a83246-4d7a-434e-a9d8-e9b2d2ab350f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f" Path:"" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: 
[openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.758117 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.758117 master-0 kubenswrapper[7599]: > pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:16:37.758409 master-0 kubenswrapper[7599]: E0313 01:16:37.758096 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"installer-2-master-0_openshift-kube-controller-manager(7106c6fe-7c8d-45b9-bc5c-521db743663f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"installer-2-master-0_openshift-kube-controller-manager(7106c6fe-7c8d-45b9-bc5c-521db743663f)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_installer-2-master-0_openshift-kube-controller-manager_7106c6fe-7c8d-45b9-bc5c-521db743663f_0(aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d): error adding pod openshift-kube-controller-manager_installer-2-master-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI 
request failed with status 400: 'ContainerID:\\\"aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d\\\" Netns:\\\"/var/run/netns/f4a83246-4d7a-434e-a9d8-e9b2d2ab350f\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-kube-controller-manager;K8S_POD_NAME=installer-2-master-0;K8S_POD_INFRA_CONTAINER_ID=aacb9fc2966cb7250231b8d648e4f4deff99b8bbcf14f8556192099d8a9c862d;K8S_POD_UID=7106c6fe-7c8d-45b9-bc5c-521db743663f\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-kube-controller-manager/installer-2-master-0] networking: Multus: [openshift-kube-controller-manager/installer-2-master-0/7106c6fe-7c8d-45b9-bc5c-521db743663f]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod installer-2-master-0 in out of cluster comm: SetNetworkStatus: failed to update the pod installer-2-master-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-kube-controller-manager/installer-2-master-0" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" Mar 13 01:16:37.838643 master-0 kubenswrapper[7599]: I0313 01:16:37.838042 7599 patch_prober.go:28] interesting pod/marketplace-operator-64bf9778cb-bx29h container/marketplace-operator namespace/openshift-marketplace: Readiness probe 
status=failure output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" start-of-body= Mar 13 01:16:37.838643 master-0 kubenswrapper[7599]: I0313 01:16:37.838128 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" podUID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.128.0.24:8080/healthz\": dial tcp 10.128.0.24:8080: connect: connection refused" Mar 13 01:16:37.870707 master-0 kubenswrapper[7599]: E0313 01:16:37.870634 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:16:37.870707 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e" Netns:"/var/run/netns/0d624674-f1b5-4d47-bc07-1f450bd83b74" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to 
update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.870707 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.870707 master-0 kubenswrapper[7599]: > Mar 13 01:16:37.870913 master-0 kubenswrapper[7599]: E0313 01:16:37.870750 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:16:37.870913 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e" Netns:"/var/run/netns/0d624674-f1b5-4d47-bc07-1f450bd83b74" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.870913 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.870913 master-0 kubenswrapper[7599]: > pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:16:37.870913 master-0 kubenswrapper[7599]: E0313 01:16:37.870804 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:16:37.870913 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e" Netns:"/var/run/netns/0d624674-f1b5-4d47-bc07-1f450bd83b74" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Path:"" ERRORED: error configuring pod [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.870913 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.870913 master-0 kubenswrapper[7599]: > pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:16:37.871241 master-0 kubenswrapper[7599]: E0313 01:16:37.870925 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator(778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator(778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-samples-operator-664cb58b85-mcfmg_openshift-cluster-samples-operator_778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0_0(6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e): error adding pod openshift-cluster-samples-operator_cluster-samples-operator-664cb58b85-mcfmg to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e\\\" Netns:\\\"/var/run/netns/0d624674-f1b5-4d47-bc07-1f450bd83b74\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-samples-operator;K8S_POD_NAME=cluster-samples-operator-664cb58b85-mcfmg;K8S_POD_INFRA_CONTAINER_ID=6fd45f37e213a81e575255b18358d6994b211f950b01d556488735fed4b9dd3e;K8S_POD_UID=778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\\\" Path:\\\"\\\" ERRORED: error configuring pod 
[openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg] networking: Multus: [openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-samples-operator-664cb58b85-mcfmg in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-samples-operator/pods/cluster-samples-operator-664cb58b85-mcfmg?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" podUID="778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: E0313 01:16:37.978453 7599 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network "multus-cni-network": plugin type="multus-shim" 
name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714" Netns:"/var/run/netns/0a18ce2f-6fa8-44cf-b4ef-d824b0ec2165" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: > Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: E0313 01:16:37.978622 7599 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: rpc 
error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714" Netns:"/var/run/netns/0a18ce2f-6fa8-44cf-b4ef-d824b0ec2165" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: E0313 01:16:37.978668 7599 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714" Netns:"/var/run/netns/0a18ce2f-6fa8-44cf-b4ef-d824b0ec2165" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Path:"" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:16:37.978655 master-0 kubenswrapper[7599]: > pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:16:37.979487 master-0 kubenswrapper[7599]: E0313 01:16:37.978794 7599 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator(65ef9aae-25a5-46c6-adf3-634f8f7a29bc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator(65ef9aae-25a5-46c6-adf3-634f8f7a29bc)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_cluster-storage-operator-6fbfc8dc8f-h9mwm_openshift-cluster-storage-operator_65ef9aae-25a5-46c6-adf3-634f8f7a29bc_0(0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714): error adding pod openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed 
with status 400: 'ContainerID:\\\"0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714\\\" Netns:\\\"/var/run/netns/0a18ce2f-6fa8-44cf-b4ef-d824b0ec2165\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-cluster-storage-operator;K8S_POD_NAME=cluster-storage-operator-6fbfc8dc8f-h9mwm;K8S_POD_INFRA_CONTAINER_ID=0ecf25d167555513026b13ff3a117c22e17f69e8443513ca34e5570df2cc2714;K8S_POD_UID=65ef9aae-25a5-46c6-adf3-634f8f7a29bc\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm] networking: Multus: [openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm/65ef9aae-25a5-46c6-adf3-634f8f7a29bc]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: SetNetworkStatus: failed to update the pod cluster-storage-operator-6fbfc8dc8f-h9mwm in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cluster-storage-operator/pods/cluster-storage-operator-6fbfc8dc8f-h9mwm?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" podUID="65ef9aae-25a5-46c6-adf3-634f8f7a29bc" Mar 13 01:16:45.334815 master-0 kubenswrapper[7599]: E0313 01:16:45.334727 7599 kubelet.go:2526] 
"Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="30.278s" Mar 13 01:16:45.342471 master-0 kubenswrapper[7599]: I0313 01:16:45.341708 7599 scope.go:117] "RemoveContainer" containerID="fd379745af9da3dead649206438373348f4ca6dba57dff1deac4d0df35fc6fc1" Mar 13 01:16:45.342865 master-0 kubenswrapper[7599]: I0313 01:16:45.342752 7599 scope.go:117] "RemoveContainer" containerID="25381ad36be0f85f98a8e3ecc8a5f4186dffd21de460ff1a56fc27b43bbb1f04" Mar 13 01:16:45.344272 master-0 kubenswrapper[7599]: I0313 01:16:45.344137 7599 scope.go:117] "RemoveContainer" containerID="aa8d570cc916b085b102875f5c8076691d32fc0570491e0ffdf16bc87e8e94b9" Mar 13 01:16:45.344741 master-0 kubenswrapper[7599]: I0313 01:16:45.344341 7599 scope.go:117] "RemoveContainer" containerID="743c555e1cf0c98c73695ed678affcb2226d9582a12dd77e2de535512f78c66d" Mar 13 01:16:45.345838 master-0 kubenswrapper[7599]: I0313 01:16:45.345406 7599 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="etcd-operator" containerStatusID={"Type":"cri-o","ID":"dcf6d152312d68c0bbfc80742b97a5a67fe4e1a416cc5f56000de592b4daaaa8"} pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" containerMessage="Container etcd-operator failed liveness probe, will be restarted" Mar 13 01:16:45.345838 master-0 kubenswrapper[7599]: I0313 01:16:45.345469 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" podUID="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" containerName="etcd-operator" containerID="cri-o://dcf6d152312d68c0bbfc80742b97a5a67fe4e1a416cc5f56000de592b4daaaa8" gracePeriod=30 Mar 13 01:16:45.346232 master-0 kubenswrapper[7599]: I0313 01:16:45.346114 7599 scope.go:117] "RemoveContainer" containerID="2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0" Mar 13 01:16:45.354489 master-0 kubenswrapper[7599]: I0313 01:16:45.354368 7599 mirror_client.go:130] "Deleting a mirror pod" 
pod="openshift-etcd/etcd-master-0-master-0" podUID="" Mar 13 01:16:45.355046 master-0 kubenswrapper[7599]: I0313 01:16:45.354994 7599 scope.go:117] "RemoveContainer" containerID="e9eb86bc8639ac87892dc75bde4aa22bd6e683c301d4d69ac50acf0d02a2db39" Mar 13 01:16:45.355488 master-0 kubenswrapper[7599]: I0313 01:16:45.355428 7599 scope.go:117] "RemoveContainer" containerID="9c0bd715b837c01a89df34dba5a1abd4f477608efb9ac5a6df89d6b122c0876b" Mar 13 01:16:45.355914 master-0 kubenswrapper[7599]: I0313 01:16:45.355839 7599 scope.go:117] "RemoveContainer" containerID="951aa4d6803ad0268be9d58f3b51ebac5555d4f85866ee29a2837692062094ee" Mar 13 01:16:45.357480 master-0 kubenswrapper[7599]: I0313 01:16:45.357427 7599 scope.go:117] "RemoveContainer" containerID="7e4809732e6f42f6e1aaeab2220c5d3d3098fc28ea26ac8cc73446ea1b10cd93" Mar 13 01:16:45.358631 master-0 kubenswrapper[7599]: I0313 01:16:45.358420 7599 scope.go:117] "RemoveContainer" containerID="94468d369b5f43adf08abc9d6a6230238254bef0eb81d4e6a3d5e925f29bcc13" Mar 13 01:16:45.362006 master-0 kubenswrapper[7599]: I0313 01:16:45.361936 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:16:45.362006 master-0 kubenswrapper[7599]: I0313 01:16:45.361987 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0" Mar 13 01:16:45.362216 master-0 kubenswrapper[7599]: I0313 01:16:45.362023 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:16:45.362216 master-0 kubenswrapper[7599]: I0313 01:16:45.362043 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:16:45.362216 master-0 kubenswrapper[7599]: I0313 01:16:45.362063 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" event={"ID":"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea","Type":"ContainerDied","Data":"db75a500d25df1d35034bc9e7d835e3af06e992e3af2605476ce0e45095ba6b9"} Mar 13 01:16:45.362398 master-0 kubenswrapper[7599]: I0313 01:16:45.362246 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0" Mar 13 01:16:45.362398 master-0 kubenswrapper[7599]: I0313 01:16:45.362274 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-mcps9" event={"ID":"c687237e-50e5-405d-8fef-0efbc3866630","Type":"ContainerDied","Data":"826ddf0fad5a47b74a9e97796304f54274bf436e1dab02b9917102d0ced785b8"} Mar 13 01:16:45.362398 master-0 kubenswrapper[7599]: I0313 01:16:45.362297 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:16:45.362398 master-0 kubenswrapper[7599]: I0313 01:16:45.362315 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" event={"ID":"fbfc2caf-126e-41b9-9b31-05f7a45d8536","Type":"ContainerDied","Data":"5436fbc43037209189594bd015e39350294b9b8da6b6096cb145d36bfb03543f"} Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362435 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362470 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362483 7599 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="8a5b69a5-a9ef-4983-9c5b-420fdafc1794" Mar 13 
01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362498 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" event={"ID":"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc","Type":"ContainerDied","Data":"7f4c53a355951175886abfb80eb4256c32b51f0ad7d9c970345c8e4c70d93ccb"} Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362540 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0" Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362560 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0-master-0"] Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362569 7599 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-etcd/etcd-master-0-master-0" mirrorPodUID="8a5b69a5-a9ef-4983-9c5b-420fdafc1794" Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362578 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362589 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-1-master-0"] Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362603 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" event={"ID":"b5757329-8692-4719-b3c7-b5df78110fcf","Type":"ContainerDied","Data":"9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d"} Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362619 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 
01:16:45.362631 7599 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362641 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" event={"ID":"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b","Type":"ContainerDied","Data":"b30ae4d37e850868384d04498318b52f585a63274ae43d082fa8cb4389cea8b3"} Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362660 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" event={"ID":"96b67a99-eada-44d7-93eb-cc3ced777fc6","Type":"ContainerDied","Data":"cc1038b189ab36843989b837c930bbf20934f08cf043e09fd788646b7d078f2a"} Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362675 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" event={"ID":"fde89b0b-7133-4b97-9e35-51c0382bd366","Type":"ContainerDied","Data":"aa8d570cc916b085b102875f5c8076691d32fc0570491e0ffdf16bc87e8e94b9"} Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362692 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173"} Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362707 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0"} Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: 
I0313 01:16:45.362708 7599 scope.go:117] "RemoveContainer" containerID="9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d" Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362722 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" event={"ID":"b5757329-8692-4719-b3c7-b5df78110fcf","Type":"ContainerStarted","Data":"25381ad36be0f85f98a8e3ecc8a5f4186dffd21de460ff1a56fc27b43bbb1f04"} Mar 13 01:16:45.362716 master-0 kubenswrapper[7599]: I0313 01:16:45.362745 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" event={"ID":"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b","Type":"ContainerStarted","Data":"a7c779880e0c80a371f65863e31c95a1c133497da0e04f38f03e862dffa279aa"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.362764 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" event={"ID":"c6db75e5-efd1-4bfa-9941-0934d7621ba2","Type":"ContainerStarted","Data":"769c129b7e29d4929952316ce6f7641c3c7ac9955f6a84df03be0a0cf43a0023"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.362785 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-mcps9" event={"ID":"c687237e-50e5-405d-8fef-0efbc3866630","Type":"ContainerStarted","Data":"b59c177e34d0deb037bbfb6fe7cd23b008e03a59c7d82a89ffa611ae562dbeb4"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.362804 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" event={"ID":"77e6cd9e-b6ef-491c-a5c3-60dab81fd752","Type":"ContainerStarted","Data":"dcf6d152312d68c0bbfc80742b97a5a67fe4e1a416cc5f56000de592b4daaaa8"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 
01:16:45.362829 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" event={"ID":"fbfc2caf-126e-41b9-9b31-05f7a45d8536","Type":"ContainerStarted","Data":"f3648127120432d42351630482fc5ec1314543a47769068b1c6a7ef537aa3e64"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.362846 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" event={"ID":"96b67a99-eada-44d7-93eb-cc3ced777fc6","Type":"ContainerStarted","Data":"58a0c19a92e7e6a597f521a8d041a767f54e7dcfa1a0f617211a394c980fae45"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.362863 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" event={"ID":"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea","Type":"ContainerStarted","Data":"5df12607056f6e5a516d0c19db2c5e705f703a62b440faa02679e80aac8df03b"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.362884 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" event={"ID":"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc","Type":"ContainerStarted","Data":"e788ac818d36552e959d3ce3c24eadc18eee4e1a4d848c9a845c316485d6e935"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.362896 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" event={"ID":"75a53c09-210a-4346-99b0-a632b9e0a3c9","Type":"ContainerDied","Data":"951aa4d6803ad0268be9d58f3b51ebac5555d4f85866ee29a2837692062094ee"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.362913 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" 
event={"ID":"74efa52b-fd97-418a-9a44-914442633f74","Type":"ContainerDied","Data":"9c0bd715b837c01a89df34dba5a1abd4f477608efb9ac5a6df89d6b122c0876b"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363002 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"3b4b0099ff3715076e4da8c307cf4cdf19113ad975d741008a026d470fd6e8de"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363017 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"59b81ddf96703b46c61723679f4eccced325378be4bf3ce47532a5cf8c25aff1"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363028 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"11afe1e82df06ef58f2b34ee7f14cab6582b1c3ebb23e73f966071d3f60bb7d3"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363050 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"bf41e0708018a7a42a9ea985f7ec3256a3866f84520062060092284abe939c72"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363061 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"d4307a8d99b06baad18f959ac230bad4c2bf7ab603532b53714a7efb8d542993"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363072 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" 
event={"ID":"8ad2a6d5-6edf-4840-89f9-47847c8dac05","Type":"ContainerDied","Data":"94468d369b5f43adf08abc9d6a6230238254bef0eb81d4e6a3d5e925f29bcc13"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363087 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerDied","Data":"2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363098 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" event={"ID":"81835d51-a414-440f-889b-690561e98d6a","Type":"ContainerDied","Data":"e9eb86bc8639ac87892dc75bde4aa22bd6e683c301d4d69ac50acf0d02a2db39"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363112 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerDied","Data":"743c555e1cf0c98c73695ed678affcb2226d9582a12dd77e2de535512f78c66d"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363125 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" event={"ID":"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e","Type":"ContainerDied","Data":"fd379745af9da3dead649206438373348f4ca6dba57dff1deac4d0df35fc6fc1"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363137 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" event={"ID":"8c377a67-e763-4925-afae-a7f8546a369b","Type":"ContainerDied","Data":"7e4809732e6f42f6e1aaeab2220c5d3d3098fc28ea26ac8cc73446ea1b10cd93"} Mar 13 01:16:45.363924 master-0 kubenswrapper[7599]: I0313 01:16:45.363150 7599 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" event={"ID":"b5757329-8692-4719-b3c7-b5df78110fcf","Type":"ContainerDied","Data":"25381ad36be0f85f98a8e3ecc8a5f4186dffd21de460ff1a56fc27b43bbb1f04"} Mar 13 01:16:45.387546 master-0 kubenswrapper[7599]: I0313 01:16:45.387457 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0" Mar 13 01:16:45.392729 master-0 kubenswrapper[7599]: I0313 01:16:45.392265 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"] Mar 13 01:16:45.423648 master-0 kubenswrapper[7599]: I0313 01:16:45.423548 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 01:16:45.430372 master-0 kubenswrapper[7599]: I0313 01:16:45.430308 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/installer-1-master-0"] Mar 13 01:16:45.457489 master-0 kubenswrapper[7599]: I0313 01:16:45.457386 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-t88cc" podStartSLOduration=180.101330715 podStartE2EDuration="3m45.457357261s" podCreationTimestamp="2026-03-13 01:13:00 +0000 UTC" firstStartedPulling="2026-03-13 01:13:02.160428456 +0000 UTC m=+41.432107850" lastFinishedPulling="2026-03-13 01:13:47.516455002 +0000 UTC m=+86.788134396" observedRunningTime="2026-03-13 01:16:45.457161246 +0000 UTC m=+264.728840670" watchObservedRunningTime="2026-03-13 01:16:45.457357261 +0000 UTC m=+264.729036685" Mar 13 01:16:45.522296 master-0 kubenswrapper[7599]: I0313 01:16:45.514543 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7"] Mar 13 01:16:45.522296 master-0 kubenswrapper[7599]: I0313 01:16:45.518262 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-748966cb9f-wnsx7"] Mar 13 01:16:45.538632 master-0 kubenswrapper[7599]: I0313 01:16:45.538591 7599 scope.go:117] "RemoveContainer" containerID="6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173" Mar 13 01:16:45.572221 master-0 kubenswrapper[7599]: I0313 01:16:45.572148 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7mqtr" podStartSLOduration=181.361358154 podStartE2EDuration="3m46.572113933s" podCreationTimestamp="2026-03-13 01:12:59 +0000 UTC" firstStartedPulling="2026-03-13 01:13:02.136355828 +0000 UTC m=+41.408035222" lastFinishedPulling="2026-03-13 01:13:47.347111567 +0000 UTC m=+86.618791001" observedRunningTime="2026-03-13 01:16:45.571007006 +0000 UTC m=+264.842686400" watchObservedRunningTime="2026-03-13 01:16:45.572113933 +0000 UTC m=+264.843793327" Mar 13 01:16:45.601335 master-0 kubenswrapper[7599]: I0313 01:16:45.601287 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8"] Mar 13 01:16:45.619100 master-0 kubenswrapper[7599]: I0313 01:16:45.619040 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-6d46b9fb7-t9sp8"] Mar 13 01:16:45.647587 master-0 kubenswrapper[7599]: I0313 01:16:45.647524 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-jzlpt" podStartSLOduration=180.335223873 podStartE2EDuration="3m47.647488367s" podCreationTimestamp="2026-03-13 01:12:58 +0000 UTC" firstStartedPulling="2026-03-13 01:13:00.057087786 +0000 UTC m=+39.328767180" lastFinishedPulling="2026-03-13 01:13:47.36935223 +0000 UTC m=+86.641031674" observedRunningTime="2026-03-13 01:16:45.646057132 +0000 UTC m=+264.917736546" watchObservedRunningTime="2026-03-13 01:16:45.647488367 +0000 UTC m=+264.919167761" Mar 13 01:16:45.670624 
master-0 kubenswrapper[7599]: I0313 01:16:45.670461 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-xnmjr" podStartSLOduration=180.413067113 podStartE2EDuration="3m44.669801344s" podCreationTimestamp="2026-03-13 01:13:01 +0000 UTC" firstStartedPulling="2026-03-13 01:13:03.174587809 +0000 UTC m=+42.446267203" lastFinishedPulling="2026-03-13 01:13:47.43132201 +0000 UTC m=+86.703001434" observedRunningTime="2026-03-13 01:16:45.669279181 +0000 UTC m=+264.940958575" watchObservedRunningTime="2026-03-13 01:16:45.669801344 +0000 UTC m=+264.941480738" Mar 13 01:16:45.681796 master-0 kubenswrapper[7599]: I0313 01:16:45.681726 7599 scope.go:117] "RemoveContainer" containerID="22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff" Mar 13 01:16:45.705400 master-0 kubenswrapper[7599]: I0313 01:16:45.705254 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-plhx7_b5757329-8692-4719-b3c7-b5df78110fcf/authentication-operator/2.log" Mar 13 01:16:45.709997 master-0 kubenswrapper[7599]: I0313 01:16:45.708779 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/0.log" Mar 13 01:16:45.711611 master-0 kubenswrapper[7599]: I0313 01:16:45.711574 7599 generic.go:334] "Generic (PLEG): container finished" podID="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" containerID="dcf6d152312d68c0bbfc80742b97a5a67fe4e1a416cc5f56000de592b4daaaa8" exitCode=0 Mar 13 01:16:45.711677 master-0 kubenswrapper[7599]: I0313 01:16:45.711648 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" event={"ID":"77e6cd9e-b6ef-491c-a5c3-60dab81fd752","Type":"ContainerDied","Data":"dcf6d152312d68c0bbfc80742b97a5a67fe4e1a416cc5f56000de592b4daaaa8"} Mar 13 
01:16:45.749825 master-0 kubenswrapper[7599]: E0313 01:16:45.746597 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" pod="openshift-etcd/etcd-master-0" Mar 13 01:16:45.762844 master-0 kubenswrapper[7599]: I0313 01:16:45.761985 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.761956281 podStartE2EDuration="761.956281ms" podCreationTimestamp="2026-03-13 01:16:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:16:45.754957922 +0000 UTC m=+265.026637326" watchObservedRunningTime="2026-03-13 01:16:45.761956281 +0000 UTC m=+265.033635735" Mar 13 01:16:45.802007 master-0 kubenswrapper[7599]: I0313 01:16:45.801012 7599 scope.go:117] "RemoveContainer" containerID="e36d289d22f168d7dd54b3be83741c3fa40edda0e8989b419788c91296bea849" Mar 13 01:16:45.847013 master-0 kubenswrapper[7599]: I0313 01:16:45.846393 7599 scope.go:117] "RemoveContainer" containerID="6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173" Mar 13 01:16:45.849489 master-0 kubenswrapper[7599]: E0313 01:16:45.848179 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173\": container with ID starting with 6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173 not found: ID does not exist" containerID="6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173" Mar 13 01:16:45.849489 master-0 kubenswrapper[7599]: I0313 01:16:45.848248 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173"} err="failed to get container status \"6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173\": rpc error: code = 
NotFound desc = could not find container \"6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173\": container with ID starting with 6e1039ec0e9a81433d877e76f7d73b03fe4985a28410751f0e01f79a01039173 not found: ID does not exist" Mar 13 01:16:45.849489 master-0 kubenswrapper[7599]: I0313 01:16:45.848285 7599 scope.go:117] "RemoveContainer" containerID="22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff" Mar 13 01:16:45.849489 master-0 kubenswrapper[7599]: E0313 01:16:45.848934 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff\": container with ID starting with 22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff not found: ID does not exist" containerID="22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff" Mar 13 01:16:45.849489 master-0 kubenswrapper[7599]: I0313 01:16:45.848980 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff"} err="failed to get container status \"22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff\": rpc error: code = NotFound desc = could not find container \"22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff\": container with ID starting with 22ee85a234e9b33abd5d15c70ccbaeb59a4d2aa68d38ed87c0222968fd2562ff not found: ID does not exist" Mar 13 01:16:45.849489 master-0 kubenswrapper[7599]: I0313 01:16:45.849013 7599 scope.go:117] "RemoveContainer" containerID="9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d" Mar 13 01:16:45.852252 master-0 kubenswrapper[7599]: E0313 01:16:45.850929 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d\": container with ID starting 
with 9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d not found: ID does not exist" containerID="9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d" Mar 13 01:16:45.852252 master-0 kubenswrapper[7599]: I0313 01:16:45.851161 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d"} err="failed to get container status \"9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d\": rpc error: code = NotFound desc = could not find container \"9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d\": container with ID starting with 9e27f81717e01415c01190c10849d2480231eacde82b8bf8ec6158732cd66f0d not found: ID does not exist" Mar 13 01:16:45.852252 master-0 kubenswrapper[7599]: I0313 01:16:45.851190 7599 scope.go:117] "RemoveContainer" containerID="f73c75626f2b8420b208819100f67cc78e1afc63da934e6341110ce6fd48cd90" Mar 13 01:16:46.001776 master-0 kubenswrapper[7599]: I0313 01:16:46.001654 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:16:46.002321 master-0 kubenswrapper[7599]: I0313 01:16:46.002294 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:16:46.293613 master-0 kubenswrapper[7599]: I0313 01:16:46.290637 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" podStartSLOduration=201.376625603 podStartE2EDuration="3m28.290621364s" podCreationTimestamp="2026-03-13 01:13:18 +0000 UTC" firstStartedPulling="2026-03-13 01:13:33.846881486 +0000 UTC m=+73.118560870" lastFinishedPulling="2026-03-13 01:13:40.760877237 +0000 UTC m=+80.032556631" observedRunningTime="2026-03-13 01:16:46.248809567 +0000 UTC m=+265.520488951" watchObservedRunningTime="2026-03-13 01:16:46.290621364 +0000 UTC m=+265.562300758" Mar 13 01:16:46.334535 master-0 kubenswrapper[7599]: I0313 01:16:46.333917 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-1-master-0"] Mar 13 01:16:46.738924 master-0 kubenswrapper[7599]: I0313 01:16:46.738855 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-z4qvz_81835d51-a414-440f-889b-690561e98d6a/manager/0.log" Mar 13 01:16:46.739590 master-0 kubenswrapper[7599]: I0313 01:16:46.739200 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" event={"ID":"81835d51-a414-440f-889b-690561e98d6a","Type":"ContainerStarted","Data":"5a44ac8efb09ea69fddd87bdea34d5c96b816c25b6e79670f14b1432f959ff9a"} Mar 13 01:16:46.740333 master-0 kubenswrapper[7599]: I0313 01:16:46.740298 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:16:46.744265 master-0 kubenswrapper[7599]: I0313 01:16:46.744224 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" 
event={"ID":"f78c05e1499b533b83f091333d61f045","Type":"ContainerStarted","Data":"01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8"} Mar 13 01:16:46.746768 master-0 kubenswrapper[7599]: I0313 01:16:46.746687 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/0.log" Mar 13 01:16:46.746768 master-0 kubenswrapper[7599]: I0313 01:16:46.746755 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerStarted","Data":"28e210a816437ccb443c8d6a143794ae992a561c368c609a20f38e48757f3d85"} Mar 13 01:16:46.749787 master-0 kubenswrapper[7599]: I0313 01:16:46.749746 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" event={"ID":"8c377a67-e763-4925-afae-a7f8546a369b","Type":"ContainerStarted","Data":"3823a1546dde2a6cc4ddf8e1b66df5b62407e5907786e28efbf8762481ad427e"} Mar 13 01:16:46.751986 master-0 kubenswrapper[7599]: I0313 01:16:46.751938 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-7rhdg_74efa52b-fd97-418a-9a44-914442633f74/openshift-controller-manager-operator/2.log" Mar 13 01:16:46.752105 master-0 kubenswrapper[7599]: I0313 01:16:46.752064 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" event={"ID":"74efa52b-fd97-418a-9a44-914442633f74","Type":"ContainerStarted","Data":"8adf888b29ba0073a860009e3a825c5df5fe0a39c41d17b6817f64e6c6ba0498"} Mar 13 01:16:46.754548 master-0 kubenswrapper[7599]: I0313 01:16:46.754489 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" event={"ID":"77e6cd9e-b6ef-491c-a5c3-60dab81fd752","Type":"ContainerStarted","Data":"ba3486eb82a9ab1039bbc9db6456f118857b681bba7748f0325d9592ed3693f6"} Mar 13 01:16:46.762761 master-0 kubenswrapper[7599]: I0313 01:16:46.762706 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-n4252_07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/manager/0.log" Mar 13 01:16:46.763080 master-0 kubenswrapper[7599]: I0313 01:16:46.763045 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" event={"ID":"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e","Type":"ContainerStarted","Data":"b54252f16f5fb3f714b95f360cc3679cec5204f01eb6fa38a3bb6001419c1a68"} Mar 13 01:16:46.763536 master-0 kubenswrapper[7599]: I0313 01:16:46.763216 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:16:46.768387 master-0 kubenswrapper[7599]: I0313 01:16:46.768328 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" event={"ID":"fde89b0b-7133-4b97-9e35-51c0382bd366","Type":"ContainerStarted","Data":"ebd6da93170865004ea338c353461cb60b3578623ca47e85a3f0dc63144a1798"} Mar 13 01:16:46.772450 master-0 kubenswrapper[7599]: I0313 01:16:46.772420 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-p5c8r_75a53c09-210a-4346-99b0-a632b9e0a3c9/ingress-operator/0.log" Mar 13 01:16:46.772607 master-0 kubenswrapper[7599]: I0313 01:16:46.772497 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" 
event={"ID":"75a53c09-210a-4346-99b0-a632b9e0a3c9","Type":"ContainerStarted","Data":"d62717ed0b3b0a598bee89eda4b3c852bd735f76c6409b93a8625746be0d6720"} Mar 13 01:16:46.779286 master-0 kubenswrapper[7599]: I0313 01:16:46.777688 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-plhx7_b5757329-8692-4719-b3c7-b5df78110fcf/authentication-operator/2.log" Mar 13 01:16:46.779286 master-0 kubenswrapper[7599]: I0313 01:16:46.777760 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" event={"ID":"b5757329-8692-4719-b3c7-b5df78110fcf","Type":"ContainerStarted","Data":"47bb236a4d2e2827dd7eae25fd1186ce4ea4e1312020194ccde0f80f546da163"} Mar 13 01:16:46.788878 master-0 kubenswrapper[7599]: I0313 01:16:46.788745 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"] Mar 13 01:16:46.789157 master-0 kubenswrapper[7599]: E0313 01:16:46.789121 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c32e816-aa69-4e9c-9fbf-56595c764f3b" containerName="installer" Mar 13 01:16:46.789157 master-0 kubenswrapper[7599]: I0313 01:16:46.789148 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c32e816-aa69-4e9c-9fbf-56595c764f3b" containerName="installer" Mar 13 01:16:46.789243 master-0 kubenswrapper[7599]: E0313 01:16:46.789171 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94588bf1-f4cd-4446-999e-0039539e65a5" containerName="installer" Mar 13 01:16:46.789243 master-0 kubenswrapper[7599]: I0313 01:16:46.789183 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="94588bf1-f4cd-4446-999e-0039539e65a5" containerName="installer" Mar 13 01:16:46.789243 master-0 kubenswrapper[7599]: E0313 01:16:46.789194 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb4407e-71fc-4684-aded-cc84f7e306dc" 
containerName="installer" Mar 13 01:16:46.789243 master-0 kubenswrapper[7599]: I0313 01:16:46.789204 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb4407e-71fc-4684-aded-cc84f7e306dc" containerName="installer" Mar 13 01:16:46.789243 master-0 kubenswrapper[7599]: E0313 01:16:46.789230 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95849efd-fabc-4e21-82e1-a15bc6eee2ba" containerName="controller-manager" Mar 13 01:16:46.789243 master-0 kubenswrapper[7599]: I0313 01:16:46.789240 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="95849efd-fabc-4e21-82e1-a15bc6eee2ba" containerName="controller-manager" Mar 13 01:16:46.789428 master-0 kubenswrapper[7599]: E0313 01:16:46.789258 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3a666ab-7b35-463e-b5fa-ecaa147296e8" containerName="route-controller-manager" Mar 13 01:16:46.789428 master-0 kubenswrapper[7599]: I0313 01:16:46.789272 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3a666ab-7b35-463e-b5fa-ecaa147296e8" containerName="route-controller-manager" Mar 13 01:16:46.789428 master-0 kubenswrapper[7599]: I0313 01:16:46.789408 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="94588bf1-f4cd-4446-999e-0039539e65a5" containerName="installer" Mar 13 01:16:46.789576 master-0 kubenswrapper[7599]: I0313 01:16:46.789435 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3a666ab-7b35-463e-b5fa-ecaa147296e8" containerName="route-controller-manager" Mar 13 01:16:46.789576 master-0 kubenswrapper[7599]: I0313 01:16:46.789450 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c32e816-aa69-4e9c-9fbf-56595c764f3b" containerName="installer" Mar 13 01:16:46.789576 master-0 kubenswrapper[7599]: I0313 01:16:46.789466 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="95849efd-fabc-4e21-82e1-a15bc6eee2ba" containerName="controller-manager" Mar 13 01:16:46.789576 master-0 
kubenswrapper[7599]: I0313 01:16:46.789478 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfb4407e-71fc-4684-aded-cc84f7e306dc" containerName="installer" Mar 13 01:16:46.790065 master-0 kubenswrapper[7599]: I0313 01:16:46.790026 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"fdcd8438-d33f-490f-a841-8944c58506f8","Type":"ContainerStarted","Data":"263627f8d8439063ebce2b99f2d70b421aed9f9cb196a75460d6a6b14ebb0fe5"} Mar 13 01:16:46.790118 master-0 kubenswrapper[7599]: I0313 01:16:46.790069 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"fdcd8438-d33f-490f-a841-8944c58506f8","Type":"ContainerStarted","Data":"5036dd248963b083dbf679edea9371d4e006e42fcff4a71dbda91fde659408c6"} Mar 13 01:16:46.790201 master-0 kubenswrapper[7599]: I0313 01:16:46.790180 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:46.792238 master-0 kubenswrapper[7599]: I0313 01:16:46.792193 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 01:16:46.796098 master-0 kubenswrapper[7599]: I0313 01:16:46.796053 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"] Mar 13 01:16:46.796380 master-0 kubenswrapper[7599]: I0313 01:16:46.796352 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 01:16:46.796678 master-0 kubenswrapper[7599]: I0313 01:16:46.796569 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 01:16:46.796742 master-0 kubenswrapper[7599]: I0313 01:16:46.796695 7599 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Mar 13 01:16:46.796902 master-0 kubenswrapper[7599]: I0313 01:16:46.796714 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Mar 13 01:16:46.798226 master-0 kubenswrapper[7599]: I0313 01:16:46.798186 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" event={"ID":"8ad2a6d5-6edf-4840-89f9-47847c8dac05","Type":"ContainerStarted","Data":"e02d6b0ebe17533096e975a2adacfc3a6fe4916c67a536db59280d4d4877a458"} Mar 13 01:16:46.798291 master-0 kubenswrapper[7599]: I0313 01:16:46.798266 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:46.802229 master-0 kubenswrapper[7599]: I0313 01:16:46.802194 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 01:16:46.802860 master-0 kubenswrapper[7599]: I0313 01:16:46.802821 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Mar 13 01:16:46.803131 master-0 kubenswrapper[7599]: I0313 01:16:46.803104 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 01:16:46.803281 master-0 kubenswrapper[7599]: I0313 01:16:46.803257 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 01:16:46.803452 master-0 kubenswrapper[7599]: I0313 01:16:46.803429 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Mar 13 01:16:46.806755 master-0 kubenswrapper[7599]: I0313 01:16:46.806709 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"] Mar 13 01:16:46.810866 master-0 kubenswrapper[7599]: I0313 01:16:46.810741 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 01:16:46.813728 master-0 kubenswrapper[7599]: I0313 01:16:46.813673 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"] Mar 13 01:16:46.928312 master-0 kubenswrapper[7599]: I0313 01:16:46.928159 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:46.930761 master-0 kubenswrapper[7599]: I0313 01:16:46.929720 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvrdt\" (UniqueName: \"kubernetes.io/projected/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-kube-api-access-jvrdt\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:46.930842 master-0 kubenswrapper[7599]: I0313 01:16:46.930762 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgz5w\" (UniqueName: \"kubernetes.io/projected/581ff17d-f121-4ece-8e45-81f1f710d163-kube-api-access-pgz5w\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:46.930842 master-0 kubenswrapper[7599]: I0313 01:16:46.930793 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:46.930945 master-0 kubenswrapper[7599]: I0313 01:16:46.930910 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:46.932213 master-0 kubenswrapper[7599]: I0313 01:16:46.931102 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:46.932213 master-0 kubenswrapper[7599]: I0313 01:16:46.931847 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:46.935713 master-0 kubenswrapper[7599]: I0313 01:16:46.935270 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config\") pod 
\"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:46.935713 master-0 kubenswrapper[7599]: I0313 01:16:46.935355 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:46.994370 master-0 kubenswrapper[7599]: I0313 01:16:46.994311 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6c32e816-aa69-4e9c-9fbf-56595c764f3b" path="/var/lib/kubelet/pods/6c32e816-aa69-4e9c-9fbf-56595c764f3b/volumes" Mar 13 01:16:46.994946 master-0 kubenswrapper[7599]: I0313 01:16:46.994907 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94588bf1-f4cd-4446-999e-0039539e65a5" path="/var/lib/kubelet/pods/94588bf1-f4cd-4446-999e-0039539e65a5/volumes" Mar 13 01:16:46.996633 master-0 kubenswrapper[7599]: I0313 01:16:46.995570 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95849efd-fabc-4e21-82e1-a15bc6eee2ba" path="/var/lib/kubelet/pods/95849efd-fabc-4e21-82e1-a15bc6eee2ba/volumes" Mar 13 01:16:46.996633 master-0 kubenswrapper[7599]: I0313 01:16:46.996213 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3a666ab-7b35-463e-b5fa-ecaa147296e8" path="/var/lib/kubelet/pods/d3a666ab-7b35-463e-b5fa-ecaa147296e8/volumes" Mar 13 01:16:47.036860 master-0 kubenswrapper[7599]: I0313 01:16:47.036785 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: 
\"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.036860 master-0 kubenswrapper[7599]: I0313 01:16:47.036863 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:47.037178 master-0 kubenswrapper[7599]: I0313 01:16:47.036902 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.037178 master-0 kubenswrapper[7599]: I0313 01:16:47.036939 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvrdt\" (UniqueName: \"kubernetes.io/projected/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-kube-api-access-jvrdt\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.037178 master-0 kubenswrapper[7599]: I0313 01:16:47.036960 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgz5w\" (UniqueName: \"kubernetes.io/projected/581ff17d-f121-4ece-8e45-81f1f710d163-kube-api-access-pgz5w\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:47.037433 master-0 kubenswrapper[7599]: I0313 01:16:47.037392 7599 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.037627 master-0 kubenswrapper[7599]: I0313 01:16:47.037612 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.037769 master-0 kubenswrapper[7599]: I0313 01:16:47.037753 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:47.038039 master-0 kubenswrapper[7599]: I0313 01:16:47.038011 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:47.039053 master-0 kubenswrapper[7599]: I0313 01:16:47.038962 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " 
pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:47.039116 master-0 kubenswrapper[7599]: I0313 01:16:47.039069 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.039771 master-0 kubenswrapper[7599]: I0313 01:16:47.039495 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:47.039771 master-0 kubenswrapper[7599]: I0313 01:16:47.039626 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.041551 master-0 kubenswrapper[7599]: I0313 01:16:47.041519 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:47.041629 master-0 kubenswrapper[7599]: I0313 01:16:47.041554 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.042129 master-0 kubenswrapper[7599]: I0313 01:16:47.042076 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.052084 master-0 kubenswrapper[7599]: I0313 01:16:47.051994 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-1-master-0" podStartSLOduration=215.051974417 podStartE2EDuration="3m35.051974417s" podCreationTimestamp="2026-03-13 01:13:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:16:47.05006831 +0000 UTC m=+266.321747714" watchObservedRunningTime="2026-03-13 01:16:47.051974417 +0000 UTC m=+266.323653811" Mar 13 01:16:47.064585 master-0 kubenswrapper[7599]: I0313 01:16:47.064492 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgz5w\" (UniqueName: \"kubernetes.io/projected/581ff17d-f121-4ece-8e45-81f1f710d163-kube-api-access-pgz5w\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:47.069093 master-0 kubenswrapper[7599]: I0313 01:16:47.069053 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvrdt\" (UniqueName: \"kubernetes.io/projected/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-kube-api-access-jvrdt\") pod 
\"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.125079 master-0 kubenswrapper[7599]: I0313 01:16:47.125008 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.137224 master-0 kubenswrapper[7599]: I0313 01:16:47.137154 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:47.535588 master-0 kubenswrapper[7599]: I0313 01:16:47.535469 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"] Mar 13 01:16:47.542084 master-0 kubenswrapper[7599]: W0313 01:16:47.542010 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd477d4b0_8b36_4ff9_9b56_0e67709b1aa7.slice/crio-aed424610f368f2ab3bbdf35a68a20b721e3a40783a95dd4a322c10d00ffa3aa WatchSource:0}: Error finding container aed424610f368f2ab3bbdf35a68a20b721e3a40783a95dd4a322c10d00ffa3aa: Status 404 returned error can't find the container with id aed424610f368f2ab3bbdf35a68a20b721e3a40783a95dd4a322c10d00ffa3aa Mar 13 01:16:47.614470 master-0 kubenswrapper[7599]: I0313 01:16:47.614394 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"] Mar 13 01:16:47.626782 master-0 kubenswrapper[7599]: W0313 01:16:47.626692 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod581ff17d_f121_4ece_8e45_81f1f710d163.slice/crio-14fb0b2eb240219320e6992cc4659cd81f4b0471ff79cf3cf2e89fa8f1d605a0 WatchSource:0}: Error finding container 
14fb0b2eb240219320e6992cc4659cd81f4b0471ff79cf3cf2e89fa8f1d605a0: Status 404 returned error can't find the container with id 14fb0b2eb240219320e6992cc4659cd81f4b0471ff79cf3cf2e89fa8f1d605a0 Mar 13 01:16:47.809843 master-0 kubenswrapper[7599]: I0313 01:16:47.809754 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" event={"ID":"581ff17d-f121-4ece-8e45-81f1f710d163","Type":"ContainerStarted","Data":"14fb0b2eb240219320e6992cc4659cd81f4b0471ff79cf3cf2e89fa8f1d605a0"} Mar 13 01:16:47.816492 master-0 kubenswrapper[7599]: I0313 01:16:47.816423 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" event={"ID":"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7","Type":"ContainerStarted","Data":"2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0"} Mar 13 01:16:47.816640 master-0 kubenswrapper[7599]: I0313 01:16:47.816504 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" event={"ID":"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7","Type":"ContainerStarted","Data":"aed424610f368f2ab3bbdf35a68a20b721e3a40783a95dd4a322c10d00ffa3aa"} Mar 13 01:16:47.818872 master-0 kubenswrapper[7599]: I0313 01:16:47.818816 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:16:47.818942 master-0 kubenswrapper[7599]: I0313 01:16:47.818878 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:47.821049 master-0 kubenswrapper[7599]: I0313 01:16:47.820994 7599 patch_prober.go:28] interesting pod/controller-manager-7f46d696f9-s9d6s container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.128.0.64:8443/healthz\": dial tcp 
10.128.0.64:8443: connect: connection refused" start-of-body= Mar 13 01:16:47.821156 master-0 kubenswrapper[7599]: I0313 01:16:47.821093 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" podUID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" containerName="controller-manager" probeResult="failure" output="Get \"https://10.128.0.64:8443/healthz\": dial tcp 10.128.0.64:8443: connect: connection refused" Mar 13 01:16:47.822669 master-0 kubenswrapper[7599]: I0313 01:16:47.822567 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:16:47.882054 master-0 kubenswrapper[7599]: I0313 01:16:47.880196 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" podStartSLOduration=201.880161677 podStartE2EDuration="3m21.880161677s" podCreationTimestamp="2026-03-13 01:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:16:47.852557583 +0000 UTC m=+267.124237007" watchObservedRunningTime="2026-03-13 01:16:47.880161677 +0000 UTC m=+267.151841111" Mar 13 01:16:47.983345 master-0 kubenswrapper[7599]: I0313 01:16:47.983248 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:16:47.984097 master-0 kubenswrapper[7599]: I0313 01:16:47.984063 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:16:48.447195 master-0 kubenswrapper[7599]: I0313 01:16:48.447030 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"] Mar 13 01:16:48.459617 master-0 kubenswrapper[7599]: W0313 01:16:48.459549 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2581e5b5_8cbb_4fa5_9888_98fb572a6232.slice/crio-65392793bd94fcb00daa4e5e0befa1cdc4621ed4d78484330a8ebe817e639598 WatchSource:0}: Error finding container 65392793bd94fcb00daa4e5e0befa1cdc4621ed4d78484330a8ebe817e639598: Status 404 returned error can't find the container with id 65392793bd94fcb00daa4e5e0befa1cdc4621ed4d78484330a8ebe817e639598 Mar 13 01:16:48.830305 master-0 kubenswrapper[7599]: I0313 01:16:48.830236 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" event={"ID":"2581e5b5-8cbb-4fa5-9888-98fb572a6232","Type":"ContainerStarted","Data":"4a206d17de41ac3bfed7fb1e2fb3fcab66732fbd00606bc85d5124426281ab87"} Mar 13 01:16:48.830305 master-0 kubenswrapper[7599]: I0313 01:16:48.830301 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" event={"ID":"2581e5b5-8cbb-4fa5-9888-98fb572a6232","Type":"ContainerStarted","Data":"65392793bd94fcb00daa4e5e0befa1cdc4621ed4d78484330a8ebe817e639598"} Mar 13 01:16:48.832412 master-0 kubenswrapper[7599]: I0313 01:16:48.832290 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" event={"ID":"581ff17d-f121-4ece-8e45-81f1f710d163","Type":"ContainerStarted","Data":"fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348"} Mar 13 01:16:48.834327 master-0 kubenswrapper[7599]: I0313 
01:16:48.834266 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:48.842537 master-0 kubenswrapper[7599]: I0313 01:16:48.841840 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:16:48.847216 master-0 kubenswrapper[7599]: I0313 01:16:48.847170 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:16:48.861797 master-0 kubenswrapper[7599]: I0313 01:16:48.861702 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" podStartSLOduration=202.861677028 podStartE2EDuration="3m22.861677028s" podCreationTimestamp="2026-03-13 01:13:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:16:48.858556413 +0000 UTC m=+268.130235807" watchObservedRunningTime="2026-03-13 01:16:48.861677028 +0000 UTC m=+268.133356422" Mar 13 01:16:48.983737 master-0 kubenswrapper[7599]: I0313 01:16:48.982680 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:16:48.983737 master-0 kubenswrapper[7599]: I0313 01:16:48.983334 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:16:49.402373 master-0 kubenswrapper[7599]: I0313 01:16:49.402190 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-2-master-0"] Mar 13 01:16:49.406949 master-0 kubenswrapper[7599]: W0313 01:16:49.406872 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7106c6fe_7c8d_45b9_bc5c_521db743663f.slice/crio-61d228ad61217efd3f38e7f1eb742a8a47bf9f51d0ed1ddebcc51b7470bf905e WatchSource:0}: Error finding container 61d228ad61217efd3f38e7f1eb742a8a47bf9f51d0ed1ddebcc51b7470bf905e: Status 404 returned error can't find the container with id 61d228ad61217efd3f38e7f1eb742a8a47bf9f51d0ed1ddebcc51b7470bf905e Mar 13 01:16:49.845597 master-0 kubenswrapper[7599]: I0313 01:16:49.845411 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"7106c6fe-7c8d-45b9-bc5c-521db743663f","Type":"ContainerStarted","Data":"9dea5041e065ce99780170074cdc1fcbcd589815d7a4ea10ac0c5a7ebf2078b0"} Mar 13 01:16:49.845597 master-0 kubenswrapper[7599]: I0313 01:16:49.845589 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"7106c6fe-7c8d-45b9-bc5c-521db743663f","Type":"ContainerStarted","Data":"61d228ad61217efd3f38e7f1eb742a8a47bf9f51d0ed1ddebcc51b7470bf905e"} Mar 13 01:16:49.869682 master-0 kubenswrapper[7599]: I0313 01:16:49.869530 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-2-master-0" podStartSLOduration=209.869488302 podStartE2EDuration="3m29.869488302s" podCreationTimestamp="2026-03-13 01:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:16:49.869166355 +0000 UTC m=+269.140845799" 
watchObservedRunningTime="2026-03-13 01:16:49.869488302 +0000 UTC m=+269.141167696" Mar 13 01:16:49.983944 master-0 kubenswrapper[7599]: I0313 01:16:49.983797 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:16:49.983944 master-0 kubenswrapper[7599]: I0313 01:16:49.983860 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:16:49.984426 master-0 kubenswrapper[7599]: I0313 01:16:49.984347 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:16:49.984503 master-0 kubenswrapper[7599]: I0313 01:16:49.984352 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:16:50.072280 master-0 kubenswrapper[7599]: I0313 01:16:50.071758 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t88cc"] Mar 13 01:16:50.072280 master-0 kubenswrapper[7599]: I0313 01:16:50.072225 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-t88cc" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerName="registry-server" containerID="cri-o://b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215" gracePeriod=2 Mar 13 01:16:50.488395 master-0 kubenswrapper[7599]: I0313 01:16:50.487926 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d9nkp"] Mar 13 01:16:50.489565 master-0 kubenswrapper[7599]: I0313 01:16:50.489394 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.492284 master-0 kubenswrapper[7599]: I0313 01:16:50.491909 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-mbxd4" Mar 13 01:16:50.510088 master-0 kubenswrapper[7599]: I0313 01:16:50.510001 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9nkp"] Mar 13 01:16:50.510609 master-0 kubenswrapper[7599]: I0313 01:16:50.510546 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-utilities\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.510679 master-0 kubenswrapper[7599]: I0313 01:16:50.510615 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rtds\" (UniqueName: \"kubernetes.io/projected/6da2aac0-42a0-45c2-93ec-b148f5889e8b-kube-api-access-9rtds\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.510740 master-0 kubenswrapper[7599]: I0313 01:16:50.510674 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-catalog-content\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.615108 master-0 kubenswrapper[7599]: I0313 01:16:50.612711 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-utilities\") pod 
\"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.615108 master-0 kubenswrapper[7599]: I0313 01:16:50.612785 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rtds\" (UniqueName: \"kubernetes.io/projected/6da2aac0-42a0-45c2-93ec-b148f5889e8b-kube-api-access-9rtds\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.615108 master-0 kubenswrapper[7599]: I0313 01:16:50.612823 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-catalog-content\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.615108 master-0 kubenswrapper[7599]: I0313 01:16:50.613272 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-catalog-content\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.615108 master-0 kubenswrapper[7599]: I0313 01:16:50.614236 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-utilities\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.632167 master-0 kubenswrapper[7599]: I0313 01:16:50.632109 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rtds\" (UniqueName: \"kubernetes.io/projected/6da2aac0-42a0-45c2-93ec-b148f5889e8b-kube-api-access-9rtds\") 
pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.817588 master-0 kubenswrapper[7599]: I0313 01:16:50.817540 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:16:50.868739 master-0 kubenswrapper[7599]: I0313 01:16:50.868677 7599 generic.go:334] "Generic (PLEG): container finished" podID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerID="b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215" exitCode=0 Mar 13 01:16:50.872955 master-0 kubenswrapper[7599]: I0313 01:16:50.869641 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t88cc" event={"ID":"c6382e2a-ec14-4457-8f26-3087b19d1e1a","Type":"ContainerDied","Data":"b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215"} Mar 13 01:16:50.953574 master-0 kubenswrapper[7599]: E0313 01:16:50.953462 7599 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215 is running failed: container process not found" containerID="b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 01:16:50.955644 master-0 kubenswrapper[7599]: E0313 01:16:50.955593 7599 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215 is running failed: container process not found" containerID="b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 01:16:50.955943 master-0 kubenswrapper[7599]: E0313 01:16:50.955900 7599 log.go:32] "ExecSync cmd from runtime service failed" err="rpc 
error: code = NotFound desc = container is not created or running: checking if PID of b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215 is running failed: container process not found" containerID="b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 01:16:50.955997 master-0 kubenswrapper[7599]: E0313 01:16:50.955946 7599 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-t88cc" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerName="registry-server" Mar 13 01:16:50.973782 master-0 kubenswrapper[7599]: I0313 01:16:50.973742 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:16:50.982806 master-0 kubenswrapper[7599]: I0313 01:16:50.982760 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:16:50.982884 master-0 kubenswrapper[7599]: I0313 01:16:50.982775 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:16:50.982990 master-0 kubenswrapper[7599]: I0313 01:16:50.982936 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:16:50.983501 master-0 kubenswrapper[7599]: I0313 01:16:50.983468 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:16:50.983685 master-0 kubenswrapper[7599]: I0313 01:16:50.983649 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:16:50.986740 master-0 kubenswrapper[7599]: I0313 01:16:50.984069 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:16:51.021817 master-0 kubenswrapper[7599]: I0313 01:16:51.021755 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-catalog-content\") pod \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " Mar 13 01:16:51.021971 master-0 kubenswrapper[7599]: I0313 01:16:51.021880 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-utilities\") pod \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " Mar 13 01:16:51.021971 master-0 kubenswrapper[7599]: I0313 01:16:51.021938 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgzzr\" (UniqueName: \"kubernetes.io/projected/c6382e2a-ec14-4457-8f26-3087b19d1e1a-kube-api-access-pgzzr\") pod \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\" (UID: \"c6382e2a-ec14-4457-8f26-3087b19d1e1a\") " Mar 13 01:16:51.024084 master-0 kubenswrapper[7599]: I0313 01:16:51.024046 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-utilities" (OuterVolumeSpecName: "utilities") pod "c6382e2a-ec14-4457-8f26-3087b19d1e1a" (UID: 
"c6382e2a-ec14-4457-8f26-3087b19d1e1a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:16:51.042177 master-0 kubenswrapper[7599]: I0313 01:16:51.042113 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6382e2a-ec14-4457-8f26-3087b19d1e1a-kube-api-access-pgzzr" (OuterVolumeSpecName: "kube-api-access-pgzzr") pod "c6382e2a-ec14-4457-8f26-3087b19d1e1a" (UID: "c6382e2a-ec14-4457-8f26-3087b19d1e1a"). InnerVolumeSpecName "kube-api-access-pgzzr". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:16:51.126147 master-0 kubenswrapper[7599]: I0313 01:16:51.126090 7599 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-utilities\") on node \"master-0\" DevicePath \"\"" Mar 13 01:16:51.126147 master-0 kubenswrapper[7599]: I0313 01:16:51.126136 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgzzr\" (UniqueName: \"kubernetes.io/projected/c6382e2a-ec14-4457-8f26-3087b19d1e1a-kube-api-access-pgzzr\") on node \"master-0\" DevicePath \"\"" Mar 13 01:16:51.146766 master-0 kubenswrapper[7599]: I0313 01:16:51.140290 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d9nkp"] Mar 13 01:16:51.208548 master-0 kubenswrapper[7599]: I0313 01:16:51.202753 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-insights/insights-operator-8f89dfddd-hn4jh"] Mar 13 01:16:51.267681 master-0 kubenswrapper[7599]: I0313 01:16:51.263751 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-4-master-0"] Mar 13 01:16:51.445571 master-0 kubenswrapper[7599]: I0313 01:16:51.445497 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"] Mar 13 01:16:51.458622 master-0 
kubenswrapper[7599]: I0313 01:16:51.458466 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c6382e2a-ec14-4457-8f26-3087b19d1e1a" (UID: "c6382e2a-ec14-4457-8f26-3087b19d1e1a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:16:51.461142 master-0 kubenswrapper[7599]: I0313 01:16:51.461090 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"] Mar 13 01:16:51.477928 master-0 kubenswrapper[7599]: I0313 01:16:51.477787 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xnmjr"] Mar 13 01:16:51.478379 master-0 kubenswrapper[7599]: I0313 01:16:51.478351 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-xnmjr" podUID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerName="registry-server" containerID="cri-o://5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58" gracePeriod=2 Mar 13 01:16:51.540842 master-0 kubenswrapper[7599]: I0313 01:16:51.540670 7599 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c6382e2a-ec14-4457-8f26-3087b19d1e1a-catalog-content\") on node \"master-0\" DevicePath \"\"" Mar 13 01:16:51.710581 master-0 kubenswrapper[7599]: I0313 01:16:51.710435 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"] Mar 13 01:16:51.883292 master-0 kubenswrapper[7599]: I0313 01:16:51.882760 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-64xrl"] Mar 13 01:16:51.883292 master-0 kubenswrapper[7599]: E0313 01:16:51.883024 7599 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerName="extract-content" Mar 13 01:16:51.883292 master-0 kubenswrapper[7599]: I0313 01:16:51.883041 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerName="extract-content" Mar 13 01:16:51.883292 master-0 kubenswrapper[7599]: E0313 01:16:51.883060 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerName="extract-utilities" Mar 13 01:16:51.883292 master-0 kubenswrapper[7599]: I0313 01:16:51.883070 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerName="extract-utilities" Mar 13 01:16:51.883292 master-0 kubenswrapper[7599]: E0313 01:16:51.883088 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerName="registry-server" Mar 13 01:16:51.883292 master-0 kubenswrapper[7599]: I0313 01:16:51.883096 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerName="registry-server" Mar 13 01:16:51.883292 master-0 kubenswrapper[7599]: I0313 01:16:51.883224 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" containerName="registry-server" Mar 13 01:16:51.884523 master-0 kubenswrapper[7599]: I0313 01:16:51.884332 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:51.888906 master-0 kubenswrapper[7599]: I0313 01:16:51.886885 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-wrgnw" Mar 13 01:16:51.894616 master-0 kubenswrapper[7599]: I0313 01:16:51.892268 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" event={"ID":"6e799871-735a-44e8-8193-24c5bb388928","Type":"ContainerStarted","Data":"b61dc113f1a4bef80c641546e2474c72c189dd507d27eb4f40039500f234ba15"} Mar 13 01:16:51.894616 master-0 kubenswrapper[7599]: I0313 01:16:51.892792 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-64xrl"] Mar 13 01:16:51.894616 master-0 kubenswrapper[7599]: I0313 01:16:51.893599 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" event={"ID":"21110b48-25fc-434a-b156-7f6bd6064bed","Type":"ContainerStarted","Data":"4ca67e8bef4478f002e4442f5b186c7d786535b25d6573f50f3d477a22f7f668"} Mar 13 01:16:51.895858 master-0 kubenswrapper[7599]: I0313 01:16:51.895836 7599 generic.go:334] "Generic (PLEG): container finished" podID="6da2aac0-42a0-45c2-93ec-b148f5889e8b" containerID="1e251dae2aaa8815d73b243c1cd351484535753e760cb3f4fe039313f2622d66" exitCode=0 Mar 13 01:16:51.896351 master-0 kubenswrapper[7599]: I0313 01:16:51.895938 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nkp" event={"ID":"6da2aac0-42a0-45c2-93ec-b148f5889e8b","Type":"ContainerDied","Data":"1e251dae2aaa8815d73b243c1cd351484535753e760cb3f4fe039313f2622d66"} Mar 13 01:16:51.896449 master-0 kubenswrapper[7599]: I0313 01:16:51.896435 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nkp" 
event={"ID":"6da2aac0-42a0-45c2-93ec-b148f5889e8b","Type":"ContainerStarted","Data":"21da7cd9c215e50e56d0756a974eda56d485e36242a9ade62bb96f7d9a66d36e"} Mar 13 01:16:51.900770 master-0 kubenswrapper[7599]: I0313 01:16:51.900751 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-t88cc" event={"ID":"c6382e2a-ec14-4457-8f26-3087b19d1e1a","Type":"ContainerDied","Data":"7137ef449e547dd401ee27f3f443a2af47f35fca54a0b207f6e8c71de0c42b56"} Mar 13 01:16:51.901469 master-0 kubenswrapper[7599]: I0313 01:16:51.901446 7599 scope.go:117] "RemoveContainer" containerID="b4cdc11d5f0882da857b10fe0fe74418d4a32a2b0df43c7237b14125bc8a4215" Mar 13 01:16:51.901782 master-0 kubenswrapper[7599]: I0313 01:16:51.901713 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-t88cc" Mar 13 01:16:51.914808 master-0 kubenswrapper[7599]: I0313 01:16:51.914745 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" event={"ID":"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0","Type":"ContainerStarted","Data":"9d5e008bf9f6b695cb5f727240a0c351d82558f527dcc2602815400da2d730f6"} Mar 13 01:16:51.915867 master-0 kubenswrapper[7599]: I0313 01:16:51.915840 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" event={"ID":"65ef9aae-25a5-46c6-adf3-634f8f7a29bc","Type":"ContainerStarted","Data":"9ed0f2af24dce87330ff074848aa9e663492193136113ddae19217ced58912fa"} Mar 13 01:16:51.919772 master-0 kubenswrapper[7599]: I0313 01:16:51.919713 7599 generic.go:334] "Generic (PLEG): container finished" podID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerID="5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58" exitCode=0 Mar 13 01:16:51.919906 master-0 kubenswrapper[7599]: I0313 01:16:51.919877 7599 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-xnmjr" event={"ID":"39bfb7e2-d1a8-4791-a52e-72f2b4790f96","Type":"ContainerDied","Data":"5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58"} Mar 13 01:16:51.923032 master-0 kubenswrapper[7599]: I0313 01:16:51.922990 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90","Type":"ContainerStarted","Data":"c0b9c0cf7cb9fa1122b0ea7980af02b767737d56971625a4ab2e9432fd86c393"} Mar 13 01:16:51.926067 master-0 kubenswrapper[7599]: I0313 01:16:51.926026 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" event={"ID":"2581e5b5-8cbb-4fa5-9888-98fb572a6232","Type":"ContainerStarted","Data":"eda1ec8422c99e6e857e30ff5f7242c30cd61a7c65794b59ce002e899da835a5"} Mar 13 01:16:51.939005 master-0 kubenswrapper[7599]: E0313 01:16:51.938934 7599 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58 is running failed: container process not found" containerID="5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 01:16:51.939743 master-0 kubenswrapper[7599]: E0313 01:16:51.939708 7599 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58 is running failed: container process not found" containerID="5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 01:16:51.940194 master-0 kubenswrapper[7599]: E0313 01:16:51.940145 7599 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = 
container is not created or running: checking if PID of 5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58 is running failed: container process not found" containerID="5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58" cmd=["grpc_health_probe","-addr=:50051"] Mar 13 01:16:51.940291 master-0 kubenswrapper[7599]: E0313 01:16:51.940258 7599 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-xnmjr" podUID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerName="registry-server" Mar 13 01:16:51.963848 master-0 kubenswrapper[7599]: I0313 01:16:51.963807 7599 scope.go:117] "RemoveContainer" containerID="ab0441da017b242d280ba9219e193f7d2acb102387dc5709f3d4ed81eb17fad9" Mar 13 01:16:51.973534 master-0 kubenswrapper[7599]: I0313 01:16:51.973324 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" podStartSLOduration=203.860039173 podStartE2EDuration="3m25.973292194s" podCreationTimestamp="2026-03-13 01:13:26 +0000 UTC" firstStartedPulling="2026-03-13 01:16:48.595632105 +0000 UTC m=+267.867311499" lastFinishedPulling="2026-03-13 01:16:50.708885136 +0000 UTC m=+269.980564520" observedRunningTime="2026-03-13 01:16:51.962930233 +0000 UTC m=+271.234609627" watchObservedRunningTime="2026-03-13 01:16:51.973292194 +0000 UTC m=+271.244971618" Mar 13 01:16:51.974966 master-0 kubenswrapper[7599]: I0313 01:16:51.974699 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnmjr" Mar 13 01:16:51.983328 master-0 kubenswrapper[7599]: I0313 01:16:51.983260 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:16:51.983405 master-0 kubenswrapper[7599]: I0313 01:16:51.983339 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:16:51.984142 master-0 kubenswrapper[7599]: I0313 01:16:51.984074 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:16:51.984142 master-0 kubenswrapper[7599]: I0313 01:16:51.984099 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:16:51.995266 master-0 kubenswrapper[7599]: I0313 01:16:51.995232 7599 scope.go:117] "RemoveContainer" containerID="e1fcc52d488ce48143ce55b0912ced806f3b7c7c5405ad16801b3c8761538abc" Mar 13 01:16:51.998075 master-0 kubenswrapper[7599]: I0313 01:16:51.997640 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-t88cc"] Mar 13 01:16:52.002448 master-0 kubenswrapper[7599]: I0313 01:16:52.002395 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-t88cc"] Mar 13 01:16:52.046185 master-0 kubenswrapper[7599]: I0313 01:16:52.046136 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44dmt\" (UniqueName: \"kubernetes.io/projected/9863f7ff-4c8d-42a3-a822-01697cf9c920-kube-api-access-44dmt\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:52.046928 master-0 kubenswrapper[7599]: I0313 01:16:52.046865 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-catalog-content\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:52.047289 master-0 kubenswrapper[7599]: I0313 01:16:52.047250 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-utilities\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:52.148468 master-0 kubenswrapper[7599]: I0313 01:16:52.148175 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-catalog-content\") pod \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " Mar 13 01:16:52.148686 master-0 kubenswrapper[7599]: I0313 01:16:52.148656 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zw474\" (UniqueName: \"kubernetes.io/projected/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-kube-api-access-zw474\") pod \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " Mar 13 01:16:52.148751 master-0 kubenswrapper[7599]: I0313 01:16:52.148725 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-utilities\") pod \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\" (UID: \"39bfb7e2-d1a8-4791-a52e-72f2b4790f96\") " Mar 13 01:16:52.149910 master-0 kubenswrapper[7599]: I0313 01:16:52.149605 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44dmt\" (UniqueName: 
\"kubernetes.io/projected/9863f7ff-4c8d-42a3-a822-01697cf9c920-kube-api-access-44dmt\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:52.149910 master-0 kubenswrapper[7599]: I0313 01:16:52.149690 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-catalog-content\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:52.149910 master-0 kubenswrapper[7599]: I0313 01:16:52.149734 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-utilities\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:52.151266 master-0 kubenswrapper[7599]: I0313 01:16:52.151216 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-utilities\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:52.175413 master-0 kubenswrapper[7599]: I0313 01:16:52.175361 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-utilities" (OuterVolumeSpecName: "utilities") pod "39bfb7e2-d1a8-4791-a52e-72f2b4790f96" (UID: "39bfb7e2-d1a8-4791-a52e-72f2b4790f96"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:16:52.175791 master-0 kubenswrapper[7599]: I0313 01:16:52.175717 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-catalog-content\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:52.177037 master-0 kubenswrapper[7599]: I0313 01:16:52.176983 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-kube-api-access-zw474" (OuterVolumeSpecName: "kube-api-access-zw474") pod "39bfb7e2-d1a8-4791-a52e-72f2b4790f96" (UID: "39bfb7e2-d1a8-4791-a52e-72f2b4790f96"). InnerVolumeSpecName "kube-api-access-zw474". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:16:52.196876 master-0 kubenswrapper[7599]: I0313 01:16:52.195929 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44dmt\" (UniqueName: \"kubernetes.io/projected/9863f7ff-4c8d-42a3-a822-01697cf9c920-kube-api-access-44dmt\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:52.209966 master-0 kubenswrapper[7599]: I0313 01:16:52.209892 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:16:52.215932 master-0 kubenswrapper[7599]: I0313 01:16:52.215853 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39bfb7e2-d1a8-4791-a52e-72f2b4790f96" (UID: "39bfb7e2-d1a8-4791-a52e-72f2b4790f96"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 01:16:52.251080 master-0 kubenswrapper[7599]: I0313 01:16:52.251037 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zw474\" (UniqueName: \"kubernetes.io/projected/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-kube-api-access-zw474\") on node \"master-0\" DevicePath \"\""
Mar 13 01:16:52.251080 master-0 kubenswrapper[7599]: I0313 01:16:52.251077 7599 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-utilities\") on node \"master-0\" DevicePath \"\""
Mar 13 01:16:52.251080 master-0 kubenswrapper[7599]: I0313 01:16:52.251087 7599 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39bfb7e2-d1a8-4791-a52e-72f2b4790f96-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 13 01:16:52.655120 master-0 kubenswrapper[7599]: I0313 01:16:52.653124 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"]
Mar 13 01:16:52.670713 master-0 kubenswrapper[7599]: I0313 01:16:52.670662 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6"]
Mar 13 01:16:52.672286 master-0 kubenswrapper[7599]: W0313 01:16:52.672239 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod65dd1dc7_1b90_40f6_82c9_dee90a1fa852.slice/crio-6d6b932de4337ed7b1b29feb31dfecf2b00d8a0c27165dce010504a3cf2e5f0a WatchSource:0}: Error finding container 6d6b932de4337ed7b1b29feb31dfecf2b00d8a0c27165dce010504a3cf2e5f0a: Status 404 returned error can't find the container with id 6d6b932de4337ed7b1b29feb31dfecf2b00d8a0c27165dce010504a3cf2e5f0a
Mar 13 01:16:52.674562 master-0 kubenswrapper[7599]: I0313 01:16:52.674090 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-64xrl"]
Mar 13 01:16:52.681351 master-0 kubenswrapper[7599]: W0313 01:16:52.681297 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod56e20b21_ba17_46ae_a740_0e7bd45eae5f.slice/crio-7d990bd61a1e1a51f37259ece6d1d14af9e817f84717aafe9cfddf2f1cc1af71 WatchSource:0}: Error finding container 7d990bd61a1e1a51f37259ece6d1d14af9e817f84717aafe9cfddf2f1cc1af71: Status 404 returned error can't find the container with id 7d990bd61a1e1a51f37259ece6d1d14af9e817f84717aafe9cfddf2f1cc1af71
Mar 13 01:16:52.681728 master-0 kubenswrapper[7599]: W0313 01:16:52.681701 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9863f7ff_4c8d_42a3_a822_01697cf9c920.slice/crio-f90e074f8ab2848261d7ebd8ff2e240e768ffdf256ecd5a6670700d24212e960 WatchSource:0}: Error finding container f90e074f8ab2848261d7ebd8ff2e240e768ffdf256ecd5a6670700d24212e960: Status 404 returned error can't find the container with id f90e074f8ab2848261d7ebd8ff2e240e768ffdf256ecd5a6670700d24212e960
Mar 13 01:16:52.875760 master-0 kubenswrapper[7599]: I0313 01:16:52.875694 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzlpt"]
Mar 13 01:16:52.876039 master-0 kubenswrapper[7599]: I0313 01:16:52.875999 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-jzlpt" podUID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerName="registry-server" containerID="cri-o://400c82d44d8e2549c63519241a4fc52c8892085f2c7319dde110c4565e584937" gracePeriod=2
Mar 13 01:16:52.949441 master-0 kubenswrapper[7599]: I0313 01:16:52.949338 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" event={"ID":"56e20b21-ba17-46ae-a740-0e7bd45eae5f","Type":"ContainerStarted","Data":"7d990bd61a1e1a51f37259ece6d1d14af9e817f84717aafe9cfddf2f1cc1af71"}
Mar 13 01:16:52.952181 master-0 kubenswrapper[7599]: I0313 01:16:52.952142 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90","Type":"ContainerStarted","Data":"7b8fcf0165d80adda60451116dbf0d6712f4aa8b3cf335302becbea472ed8b9a"}
Mar 13 01:16:52.956712 master-0 kubenswrapper[7599]: I0313 01:16:52.956414 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64xrl" event={"ID":"9863f7ff-4c8d-42a3-a822-01697cf9c920","Type":"ContainerStarted","Data":"f90e074f8ab2848261d7ebd8ff2e240e768ffdf256ecd5a6670700d24212e960"}
Mar 13 01:16:52.960154 master-0 kubenswrapper[7599]: I0313 01:16:52.959497 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" event={"ID":"65dd1dc7-1b90-40f6-82c9-dee90a1fa852","Type":"ContainerStarted","Data":"343cb58ab8417bb484f95db47390646ce3098a40fd9a7632d9c63f79a16bfaa3"}
Mar 13 01:16:52.960154 master-0 kubenswrapper[7599]: I0313 01:16:52.959573 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" event={"ID":"65dd1dc7-1b90-40f6-82c9-dee90a1fa852","Type":"ContainerStarted","Data":"6d6b932de4337ed7b1b29feb31dfecf2b00d8a0c27165dce010504a3cf2e5f0a"}
Mar 13 01:16:52.965274 master-0 kubenswrapper[7599]: I0313 01:16:52.965191 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-xnmjr" event={"ID":"39bfb7e2-d1a8-4791-a52e-72f2b4790f96","Type":"ContainerDied","Data":"16702ae6bf55253a1d4eab890d7c44c135c95ffb1a9130b6d582c2d745d25c4a"}
Mar 13 01:16:52.965274 master-0 kubenswrapper[7599]: I0313 01:16:52.965238 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-xnmjr"
Mar 13 01:16:52.965274 master-0 kubenswrapper[7599]: I0313 01:16:52.965251 7599 scope.go:117] "RemoveContainer" containerID="5eb3c5046b5b35ae52c94cb4015cec80768772841da2dec679dc879be8e7cb58"
Mar 13 01:16:52.975060 master-0 kubenswrapper[7599]: I0313 01:16:52.974975 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-4-master-0" podStartSLOduration=218.97495241 podStartE2EDuration="3m38.97495241s" podCreationTimestamp="2026-03-13 01:13:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:16:52.971406145 +0000 UTC m=+272.243085549" watchObservedRunningTime="2026-03-13 01:16:52.97495241 +0000 UTC m=+272.246631804"
Mar 13 01:16:52.999923 master-0 kubenswrapper[7599]: I0313 01:16:52.999493 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c6382e2a-ec14-4457-8f26-3087b19d1e1a" path="/var/lib/kubelet/pods/c6382e2a-ec14-4457-8f26-3087b19d1e1a/volumes"
Mar 13 01:16:53.040375 master-0 kubenswrapper[7599]: I0313 01:16:53.040225 7599 scope.go:117] "RemoveContainer" containerID="bbd115c3920bc3d2b6483fd0c3c7e46a8152587c78c6bc52a5fe4a31a5ba7a98"
Mar 13 01:16:53.044041 master-0 kubenswrapper[7599]: I0313 01:16:53.043910 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-xnmjr"]
Mar 13 01:16:53.048337 master-0 kubenswrapper[7599]: I0313 01:16:53.048287 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-xnmjr"]
Mar 13 01:16:53.067474 master-0 kubenswrapper[7599]: I0313 01:16:53.067379 7599 scope.go:117] "RemoveContainer" containerID="abf065e579740424bc4601bcbfcedea8ca832288e848753af66ad4e44ef4bf9f"
Mar 13 01:16:53.223791 master-0 kubenswrapper[7599]: I0313 01:16:53.217474 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:16:53.223791 master-0 kubenswrapper[7599]: I0313 01:16:53.222260 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:16:53.280142 master-0 kubenswrapper[7599]: I0313 01:16:53.279828 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zglhp"]
Mar 13 01:16:53.280142 master-0 kubenswrapper[7599]: E0313 01:16:53.280125 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerName="extract-utilities"
Mar 13 01:16:53.280142 master-0 kubenswrapper[7599]: I0313 01:16:53.280144 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerName="extract-utilities"
Mar 13 01:16:53.280142 master-0 kubenswrapper[7599]: E0313 01:16:53.280157 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerName="extract-content"
Mar 13 01:16:53.280142 master-0 kubenswrapper[7599]: I0313 01:16:53.280167 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerName="extract-content"
Mar 13 01:16:53.280142 master-0 kubenswrapper[7599]: E0313 01:16:53.280187 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerName="registry-server"
Mar 13 01:16:53.280142 master-0 kubenswrapper[7599]: I0313 01:16:53.280194 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerName="registry-server"
Mar 13 01:16:53.281832 master-0 kubenswrapper[7599]: I0313 01:16:53.280315 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" containerName="registry-server"
Mar 13 01:16:53.281832 master-0 kubenswrapper[7599]: I0313 01:16:53.281317 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.286661 master-0 kubenswrapper[7599]: I0313 01:16:53.286635 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-m4df5"
Mar 13 01:16:53.298306 master-0 kubenswrapper[7599]: I0313 01:16:53.298240 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zglhp"]
Mar 13 01:16:53.403964 master-0 kubenswrapper[7599]: I0313 01:16:53.403880 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:16:53.475089 master-0 kubenswrapper[7599]: I0313 01:16:53.474950 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qq6v6\" (UniqueName: \"kubernetes.io/projected/9d2f93bd-e4ce-4ed2-b249-946338f753ed-kube-api-access-qq6v6\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.475089 master-0 kubenswrapper[7599]: I0313 01:16:53.475034 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-utilities\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.476062 master-0 kubenswrapper[7599]: I0313 01:16:53.476015 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-catalog-content\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.578088 master-0 kubenswrapper[7599]: I0313 01:16:53.578024 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-utilities\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.578421 master-0 kubenswrapper[7599]: I0313 01:16:53.578359 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-catalog-content\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.578624 master-0 kubenswrapper[7599]: I0313 01:16:53.578602 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq6v6\" (UniqueName: \"kubernetes.io/projected/9d2f93bd-e4ce-4ed2-b249-946338f753ed-kube-api-access-qq6v6\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.579297 master-0 kubenswrapper[7599]: I0313 01:16:53.579070 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-utilities\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.579297 master-0 kubenswrapper[7599]: I0313 01:16:53.579111 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-catalog-content\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.599440 master-0 kubenswrapper[7599]: I0313 01:16:53.599391 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6v6\" (UniqueName: \"kubernetes.io/projected/9d2f93bd-e4ce-4ed2-b249-946338f753ed-kube-api-access-qq6v6\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.609620 master-0 kubenswrapper[7599]: I0313 01:16:53.609130 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:16:53.990794 master-0 kubenswrapper[7599]: I0313 01:16:53.990723 7599 generic.go:334] "Generic (PLEG): container finished" podID="9863f7ff-4c8d-42a3-a822-01697cf9c920" containerID="e6fb5566e61aacae6cae75fa3a8129afd169d9d82e676f7571f17acc0384df03" exitCode=0
Mar 13 01:16:53.991331 master-0 kubenswrapper[7599]: I0313 01:16:53.990849 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64xrl" event={"ID":"9863f7ff-4c8d-42a3-a822-01697cf9c920","Type":"ContainerDied","Data":"e6fb5566e61aacae6cae75fa3a8129afd169d9d82e676f7571f17acc0384df03"}
Mar 13 01:16:53.998318 master-0 kubenswrapper[7599]: I0313 01:16:53.998253 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nkp" event={"ID":"6da2aac0-42a0-45c2-93ec-b148f5889e8b","Type":"ContainerStarted","Data":"e494bdc5d34f6d35be15c841021162373cc2a0a39223427d66e514de073d9457"}
Mar 13 01:16:54.004296 master-0 kubenswrapper[7599]: I0313 01:16:54.004258 7599 generic.go:334] "Generic (PLEG): container finished" podID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerID="400c82d44d8e2549c63519241a4fc52c8892085f2c7319dde110c4565e584937" exitCode=0
Mar 13 01:16:54.004918 master-0 kubenswrapper[7599]: I0313 01:16:54.004890 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzlpt" event={"ID":"40c57f94-16b7-4011-bc29-386d52a06d2a","Type":"ContainerDied","Data":"400c82d44d8e2549c63519241a4fc52c8892085f2c7319dde110c4565e584937"}
Mar 13 01:16:54.272050 master-0 kubenswrapper[7599]: I0313 01:16:54.271904 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mqtr"]
Mar 13 01:16:54.272252 master-0 kubenswrapper[7599]: I0313 01:16:54.272218 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7mqtr" podUID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerName="registry-server" containerID="cri-o://b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096" gracePeriod=2
Mar 13 01:16:54.608107 master-0 kubenswrapper[7599]: I0313 01:16:54.608066 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:16:54.684335 master-0 kubenswrapper[7599]: I0313 01:16:54.684187 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cx58l"]
Mar 13 01:16:54.684335 master-0 kubenswrapper[7599]: E0313 01:16:54.684478 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerName="extract-utilities"
Mar 13 01:16:54.688623 master-0 kubenswrapper[7599]: I0313 01:16:54.684493 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerName="extract-utilities"
Mar 13 01:16:54.688623 master-0 kubenswrapper[7599]: E0313 01:16:54.684639 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerName="registry-server"
Mar 13 01:16:54.688623 master-0 kubenswrapper[7599]: I0313 01:16:54.684665 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerName="registry-server"
Mar 13 01:16:54.688623 master-0 kubenswrapper[7599]: E0313 01:16:54.684686 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerName="extract-content"
Mar 13 01:16:54.688623 master-0 kubenswrapper[7599]: I0313 01:16:54.684693 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerName="extract-content"
Mar 13 01:16:54.688623 master-0 kubenswrapper[7599]: I0313 01:16:54.684778 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="40c57f94-16b7-4011-bc29-386d52a06d2a" containerName="registry-server"
Mar 13 01:16:54.688623 master-0 kubenswrapper[7599]: I0313 01:16:54.685578 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:54.689111 master-0 kubenswrapper[7599]: I0313 01:16:54.689056 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bbmgf"
Mar 13 01:16:54.708142 master-0 kubenswrapper[7599]: I0313 01:16:54.704247 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpp24\" (UniqueName: \"kubernetes.io/projected/40c57f94-16b7-4011-bc29-386d52a06d2a-kube-api-access-dpp24\") pod \"40c57f94-16b7-4011-bc29-386d52a06d2a\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") "
Mar 13 01:16:54.708142 master-0 kubenswrapper[7599]: I0313 01:16:54.704359 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-catalog-content\") pod \"40c57f94-16b7-4011-bc29-386d52a06d2a\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") "
Mar 13 01:16:54.708142 master-0 kubenswrapper[7599]: I0313 01:16:54.704434 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-utilities\") pod \"40c57f94-16b7-4011-bc29-386d52a06d2a\" (UID: \"40c57f94-16b7-4011-bc29-386d52a06d2a\") "
Mar 13 01:16:54.708142 master-0 kubenswrapper[7599]: I0313 01:16:54.704900 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-utilities\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:54.708142 master-0 kubenswrapper[7599]: I0313 01:16:54.704946 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-catalog-content\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:54.708142 master-0 kubenswrapper[7599]: I0313 01:16:54.705002 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvckz\" (UniqueName: \"kubernetes.io/projected/fb5dee36-70a4-47a4-afc2-d3209a476362-kube-api-access-mvckz\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:54.708142 master-0 kubenswrapper[7599]: I0313 01:16:54.705339 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cx58l"]
Mar 13 01:16:54.708142 master-0 kubenswrapper[7599]: I0313 01:16:54.707864 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-utilities" (OuterVolumeSpecName: "utilities") pod "40c57f94-16b7-4011-bc29-386d52a06d2a" (UID: "40c57f94-16b7-4011-bc29-386d52a06d2a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 01:16:54.710695 master-0 kubenswrapper[7599]: I0313 01:16:54.710494 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40c57f94-16b7-4011-bc29-386d52a06d2a-kube-api-access-dpp24" (OuterVolumeSpecName: "kube-api-access-dpp24") pod "40c57f94-16b7-4011-bc29-386d52a06d2a" (UID: "40c57f94-16b7-4011-bc29-386d52a06d2a"). InnerVolumeSpecName "kube-api-access-dpp24". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:16:54.796783 master-0 kubenswrapper[7599]: I0313 01:16:54.796654 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "40c57f94-16b7-4011-bc29-386d52a06d2a" (UID: "40c57f94-16b7-4011-bc29-386d52a06d2a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 01:16:54.805822 master-0 kubenswrapper[7599]: I0313 01:16:54.805731 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-utilities\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:54.805822 master-0 kubenswrapper[7599]: I0313 01:16:54.805772 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-catalog-content\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:54.805822 master-0 kubenswrapper[7599]: I0313 01:16:54.805802 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvckz\" (UniqueName: \"kubernetes.io/projected/fb5dee36-70a4-47a4-afc2-d3209a476362-kube-api-access-mvckz\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:54.805998 master-0 kubenswrapper[7599]: I0313 01:16:54.805870 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpp24\" (UniqueName: \"kubernetes.io/projected/40c57f94-16b7-4011-bc29-386d52a06d2a-kube-api-access-dpp24\") on node \"master-0\" DevicePath \"\""
Mar 13 01:16:54.805998 master-0 kubenswrapper[7599]: I0313 01:16:54.805884 7599 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 13 01:16:54.805998 master-0 kubenswrapper[7599]: I0313 01:16:54.805895 7599 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/40c57f94-16b7-4011-bc29-386d52a06d2a-utilities\") on node \"master-0\" DevicePath \"\""
Mar 13 01:16:54.806914 master-0 kubenswrapper[7599]: I0313 01:16:54.806875 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-catalog-content\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:54.807400 master-0 kubenswrapper[7599]: I0313 01:16:54.807337 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-utilities\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:54.822958 master-0 kubenswrapper[7599]: I0313 01:16:54.822930 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvckz\" (UniqueName: \"kubernetes.io/projected/fb5dee36-70a4-47a4-afc2-d3209a476362-kube-api-access-mvckz\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:54.999087 master-0 kubenswrapper[7599]: I0313 01:16:54.999006 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39bfb7e2-d1a8-4791-a52e-72f2b4790f96" path="/var/lib/kubelet/pods/39bfb7e2-d1a8-4791-a52e-72f2b4790f96/volumes"
Mar 13 01:16:55.011108 master-0 kubenswrapper[7599]: I0313 01:16:55.011056 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:16:55.013115 master-0 kubenswrapper[7599]: I0313 01:16:55.013058 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-jzlpt" event={"ID":"40c57f94-16b7-4011-bc29-386d52a06d2a","Type":"ContainerDied","Data":"b499ba30f4ea8be865dc7a8837d7f5fa14f7ab7345bba4ad96fb42befea24a27"}
Mar 13 01:16:55.013273 master-0 kubenswrapper[7599]: I0313 01:16:55.013140 7599 scope.go:117] "RemoveContainer" containerID="400c82d44d8e2549c63519241a4fc52c8892085f2c7319dde110c4565e584937"
Mar 13 01:16:55.013273 master-0 kubenswrapper[7599]: I0313 01:16:55.013135 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-jzlpt"
Mar 13 01:16:55.017444 master-0 kubenswrapper[7599]: I0313 01:16:55.017409 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="kube-system/bootstrap-kube-controller-manager-master-0"
Mar 13 01:16:55.055341 master-0 kubenswrapper[7599]: I0313 01:16:55.055240 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-jzlpt"]
Mar 13 01:16:55.061720 master-0 kubenswrapper[7599]: I0313 01:16:55.061563 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-jzlpt"]
Mar 13 01:16:56.105580 master-0 kubenswrapper[7599]: I0313 01:16:56.105339 7599 scope.go:117] "RemoveContainer" containerID="077aaebe5d05ea235d4155fe2579604bd5aaa26272fc52bf8e69c62760433c36"
Mar 13 01:16:56.641756 master-0 kubenswrapper[7599]: I0313 01:16:56.641705 7599 scope.go:117] "RemoveContainer" containerID="66ac2b182d8988508548db956904c7eb36936256dfbe1d0d938933e382dd821d"
Mar 13 01:16:56.726217 master-0 kubenswrapper[7599]: I0313 01:16:56.726185 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:16:56.731790 master-0 kubenswrapper[7599]: I0313 01:16:56.731754 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-catalog-content\") pod \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") "
Mar 13 01:16:56.731790 master-0 kubenswrapper[7599]: I0313 01:16:56.731793 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-utilities\") pod \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") "
Mar 13 01:16:56.732009 master-0 kubenswrapper[7599]: I0313 01:16:56.731830 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnlk7\" (UniqueName: \"kubernetes.io/projected/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-kube-api-access-rnlk7\") pod \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\" (UID: \"9992615a-c49b-4ef0-b02b-c6cd2e719fa3\") "
Mar 13 01:16:56.732898 master-0 kubenswrapper[7599]: I0313 01:16:56.732818 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-utilities" (OuterVolumeSpecName: "utilities") pod "9992615a-c49b-4ef0-b02b-c6cd2e719fa3" (UID: "9992615a-c49b-4ef0-b02b-c6cd2e719fa3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 01:16:56.735424 master-0 kubenswrapper[7599]: I0313 01:16:56.735070 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-kube-api-access-rnlk7" (OuterVolumeSpecName: "kube-api-access-rnlk7") pod "9992615a-c49b-4ef0-b02b-c6cd2e719fa3" (UID: "9992615a-c49b-4ef0-b02b-c6cd2e719fa3"). InnerVolumeSpecName "kube-api-access-rnlk7". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:16:56.766711 master-0 kubenswrapper[7599]: I0313 01:16:56.765812 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:16:56.783208 master-0 kubenswrapper[7599]: I0313 01:16:56.782470 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9992615a-c49b-4ef0-b02b-c6cd2e719fa3" (UID: "9992615a-c49b-4ef0-b02b-c6cd2e719fa3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 01:16:56.834063 master-0 kubenswrapper[7599]: I0313 01:16:56.833924 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnlk7\" (UniqueName: \"kubernetes.io/projected/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-kube-api-access-rnlk7\") on node \"master-0\" DevicePath \"\""
Mar 13 01:16:56.834063 master-0 kubenswrapper[7599]: I0313 01:16:56.833971 7599 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-catalog-content\") on node \"master-0\" DevicePath \"\""
Mar 13 01:16:56.834063 master-0 kubenswrapper[7599]: I0313 01:16:56.833983 7599 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9992615a-c49b-4ef0-b02b-c6cd2e719fa3-utilities\") on node \"master-0\" DevicePath \"\""
Mar 13 01:16:56.941381 master-0 kubenswrapper[7599]: I0313 01:16:56.941341 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:16:56.993147 master-0 kubenswrapper[7599]: I0313 01:16:56.992974 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40c57f94-16b7-4011-bc29-386d52a06d2a" path="/var/lib/kubelet/pods/40c57f94-16b7-4011-bc29-386d52a06d2a/volumes"
Mar 13 01:16:57.000988 master-0 kubenswrapper[7599]: I0313 01:16:57.000828 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cx58l"]
Mar 13 01:16:57.018815 master-0 kubenswrapper[7599]: W0313 01:16:57.018732 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb5dee36_70a4_47a4_afc2_d3209a476362.slice/crio-60159a917a34f7d64b3ba3a186dff388b89b7011483106eb857811a35e9e0fbb WatchSource:0}: Error finding container 60159a917a34f7d64b3ba3a186dff388b89b7011483106eb857811a35e9e0fbb: Status 404 returned error can't find the container with id 60159a917a34f7d64b3ba3a186dff388b89b7011483106eb857811a35e9e0fbb
Mar 13 01:16:57.034573 master-0 kubenswrapper[7599]: I0313 01:16:57.033337 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zglhp"]
Mar 13 01:16:57.039921 master-0 kubenswrapper[7599]: I0313 01:16:57.037429 7599 generic.go:334] "Generic (PLEG): container finished" podID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerID="b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096" exitCode=0
Mar 13 01:16:57.039921 master-0 kubenswrapper[7599]: I0313 01:16:57.037534 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7mqtr"
Mar 13 01:16:57.039921 master-0 kubenswrapper[7599]: I0313 01:16:57.037537 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mqtr" event={"ID":"9992615a-c49b-4ef0-b02b-c6cd2e719fa3","Type":"ContainerDied","Data":"b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096"}
Mar 13 01:16:57.039921 master-0 kubenswrapper[7599]: I0313 01:16:57.038946 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7mqtr" event={"ID":"9992615a-c49b-4ef0-b02b-c6cd2e719fa3","Type":"ContainerDied","Data":"731c43764bf6ca60ccb49818767715764d1313ac8e97ad985509652329db44a1"}
Mar 13 01:16:57.039921 master-0 kubenswrapper[7599]: I0313 01:16:57.038985 7599 scope.go:117] "RemoveContainer" containerID="b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096"
Mar 13 01:16:57.050058 master-0 kubenswrapper[7599]: I0313 01:16:57.049907 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" event={"ID":"65ef9aae-25a5-46c6-adf3-634f8f7a29bc","Type":"ContainerStarted","Data":"3826c951ee1c462846026ab1f98d1769b86e7fa940c9be8c362c84140d297c72"}
Mar 13 01:16:57.060102 master-0 kubenswrapper[7599]: I0313 01:16:57.060040 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" event={"ID":"21110b48-25fc-434a-b156-7f6bd6064bed","Type":"ContainerStarted","Data":"0cfdb95efdc8432bdd4633516711c41c3cb5e31aacb0fb3f7ab64226c6ff685f"}
Mar 13 01:16:57.063453 master-0 kubenswrapper[7599]: I0313 01:16:57.063396 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nkp" event={"ID":"6da2aac0-42a0-45c2-93ec-b148f5889e8b","Type":"ContainerDied","Data":"e494bdc5d34f6d35be15c841021162373cc2a0a39223427d66e514de073d9457"}
Mar 13 01:16:57.064584 master-0 kubenswrapper[7599]: I0313 01:16:57.063345 7599 generic.go:334] "Generic (PLEG): container finished" podID="6da2aac0-42a0-45c2-93ec-b148f5889e8b" containerID="e494bdc5d34f6d35be15c841021162373cc2a0a39223427d66e514de073d9457" exitCode=0
Mar 13 01:16:57.067133 master-0 kubenswrapper[7599]: I0313 01:16:57.065991 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cx58l" event={"ID":"fb5dee36-70a4-47a4-afc2-d3209a476362","Type":"ContainerStarted","Data":"60159a917a34f7d64b3ba3a186dff388b89b7011483106eb857811a35e9e0fbb"}
Mar 13 01:16:57.086180 master-0 kubenswrapper[7599]: I0313 01:16:57.085064 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" podStartSLOduration=205.858137161 podStartE2EDuration="3m31.085030736s" podCreationTimestamp="2026-03-13 01:13:26 +0000 UTC" firstStartedPulling="2026-03-13 01:16:51.447324028 +0000 UTC m=+270.719003422" lastFinishedPulling="2026-03-13 01:16:56.674217603 +0000 UTC m=+275.945896997" observedRunningTime="2026-03-13 01:16:57.080823884 +0000 UTC m=+276.352503278" watchObservedRunningTime="2026-03-13 01:16:57.085030736 +0000 UTC m=+276.356710130"
Mar 13 01:16:57.164120 master-0 kubenswrapper[7599]: I0313 01:16:57.162178 7599 scope.go:117] "RemoveContainer" containerID="d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f"
Mar 13 01:16:57.164120 master-0 kubenswrapper[7599]: I0313 01:16:57.163503 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" podStartSLOduration=204.717827828 podStartE2EDuration="3m30.163476822s" podCreationTimestamp="2026-03-13 01:13:27 +0000 UTC" firstStartedPulling="2026-03-13 01:16:51.223349323 +0000 UTC m=+270.495028717" lastFinishedPulling="2026-03-13 01:16:56.668998317 +0000 UTC m=+275.940677711" observedRunningTime="2026-03-13 01:16:57.122830079 +0000 UTC m=+276.394509483" watchObservedRunningTime="2026-03-13 01:16:57.163476822 +0000 UTC m=+276.435156216"
Mar 13 01:16:57.192017 master-0 kubenswrapper[7599]: I0313 01:16:57.191924 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mqtr"]
Mar 13 01:16:57.195627 master-0 kubenswrapper[7599]: I0313 01:16:57.194813 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7mqtr"]
Mar 13 01:16:57.213901 master-0 kubenswrapper[7599]: I0313 01:16:57.211429 7599 scope.go:117] "RemoveContainer" containerID="39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87"
Mar 13 01:16:57.248790 master-0 kubenswrapper[7599]: I0313 01:16:57.248745 7599 scope.go:117] "RemoveContainer" containerID="b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096"
Mar 13 01:16:57.251009 master-0 kubenswrapper[7599]: E0313 01:16:57.250065 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096\": container with ID starting with b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096 not found: ID does not exist" containerID="b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096"
Mar 13 01:16:57.251009 master-0 kubenswrapper[7599]: I0313 01:16:57.250118 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096"} err="failed to get container status \"b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096\": rpc error: code = NotFound desc = could not find container \"b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096\": container with ID starting with b15586729cdabaa08d51b3174225c84244bf0ccb4a23f2d046b7b2a054e75096 not found: ID does not exist"
Mar 13 01:16:57.251009 master-0
kubenswrapper[7599]: I0313 01:16:57.250158 7599 scope.go:117] "RemoveContainer" containerID="d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f" Mar 13 01:16:57.254810 master-0 kubenswrapper[7599]: E0313 01:16:57.254759 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f\": container with ID starting with d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f not found: ID does not exist" containerID="d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f" Mar 13 01:16:57.254879 master-0 kubenswrapper[7599]: I0313 01:16:57.254820 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f"} err="failed to get container status \"d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f\": rpc error: code = NotFound desc = could not find container \"d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f\": container with ID starting with d40b5812dac6b8bccae1637f40310abe862d934cc6dbbadf6b000e58c2cf4c8f not found: ID does not exist" Mar 13 01:16:57.254879 master-0 kubenswrapper[7599]: I0313 01:16:57.254857 7599 scope.go:117] "RemoveContainer" containerID="39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87" Mar 13 01:16:57.259116 master-0 kubenswrapper[7599]: E0313 01:16:57.258653 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87\": container with ID starting with 39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87 not found: ID does not exist" containerID="39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87" Mar 13 01:16:57.259116 master-0 kubenswrapper[7599]: I0313 01:16:57.258708 7599 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87"} err="failed to get container status \"39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87\": rpc error: code = NotFound desc = could not find container \"39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87\": container with ID starting with 39f9ca67b1dec2d73a8b330be60e578732d0d1aca0801e59eb11ec9f0c931a87 not found: ID does not exist" Mar 13 01:16:58.074862 master-0 kubenswrapper[7599]: I0313 01:16:58.074805 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" event={"ID":"56e20b21-ba17-46ae-a740-0e7bd45eae5f","Type":"ContainerStarted","Data":"08915b60146d98d7efb6d41a6c922970c9b802ffad2670270c869858e2667b72"} Mar 13 01:16:58.078271 master-0 kubenswrapper[7599]: I0313 01:16:58.078213 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" event={"ID":"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0","Type":"ContainerStarted","Data":"3b78076191c6ff5862685fb36cdf2787c0641062c563324e3bc9a0b189ad5e4c"} Mar 13 01:16:58.078271 master-0 kubenswrapper[7599]: I0313 01:16:58.078264 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" event={"ID":"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0","Type":"ContainerStarted","Data":"0ec84938ff9140f8fa534f766eb03e4fb8c9c27783df2fbaec36ea548e9c6726"} Mar 13 01:16:58.079897 master-0 kubenswrapper[7599]: I0313 01:16:58.079856 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" event={"ID":"6e799871-735a-44e8-8193-24c5bb388928","Type":"ContainerStarted","Data":"49a88adeccd3ed4606c86367088922f6996546a7c16b7f7a98e260c50e585e6b"} Mar 13 01:16:58.081452 master-0 
kubenswrapper[7599]: I0313 01:16:58.081420 7599 generic.go:334] "Generic (PLEG): container finished" podID="fb5dee36-70a4-47a4-afc2-d3209a476362" containerID="8b167e4b932b64d1bd8542773273ff5f0d06008ccdbf22a27a549d7fe3c912eb" exitCode=0 Mar 13 01:16:58.081542 master-0 kubenswrapper[7599]: I0313 01:16:58.081501 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cx58l" event={"ID":"fb5dee36-70a4-47a4-afc2-d3209a476362","Type":"ContainerDied","Data":"8b167e4b932b64d1bd8542773273ff5f0d06008ccdbf22a27a549d7fe3c912eb"} Mar 13 01:16:58.085173 master-0 kubenswrapper[7599]: I0313 01:16:58.084422 7599 generic.go:334] "Generic (PLEG): container finished" podID="9863f7ff-4c8d-42a3-a822-01697cf9c920" containerID="3f043b4a215a970a593ef894cb43fbc8629b221e80d790f74a2607306302a1c4" exitCode=0 Mar 13 01:16:58.085173 master-0 kubenswrapper[7599]: I0313 01:16:58.084472 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64xrl" event={"ID":"9863f7ff-4c8d-42a3-a822-01697cf9c920","Type":"ContainerDied","Data":"3f043b4a215a970a593ef894cb43fbc8629b221e80d790f74a2607306302a1c4"} Mar 13 01:16:58.089274 master-0 kubenswrapper[7599]: I0313 01:16:58.089204 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" event={"ID":"21110b48-25fc-434a-b156-7f6bd6064bed","Type":"ContainerStarted","Data":"d060cefc67bacf4ab2a22d4dc70562fbf9cf3802cb02b0af1c0ec384224603d8"} Mar 13 01:16:58.091554 master-0 kubenswrapper[7599]: I0313 01:16:58.091448 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" podStartSLOduration=220.985875419 podStartE2EDuration="3m45.091423075s" podCreationTimestamp="2026-03-13 01:13:13 +0000 UTC" firstStartedPulling="2026-03-13 01:16:52.712835773 +0000 UTC m=+271.984515167" lastFinishedPulling="2026-03-13 
01:16:56.818383429 +0000 UTC m=+276.090062823" observedRunningTime="2026-03-13 01:16:58.091411305 +0000 UTC m=+277.363090709" watchObservedRunningTime="2026-03-13 01:16:58.091423075 +0000 UTC m=+277.363102469" Mar 13 01:16:58.101749 master-0 kubenswrapper[7599]: I0313 01:16:58.097083 7599 generic.go:334] "Generic (PLEG): container finished" podID="9d2f93bd-e4ce-4ed2-b249-946338f753ed" containerID="0e8798fe2e8ef33cc2b91fe39e59f52189be2b65c2d2ed1095f875a54002ee95" exitCode=0 Mar 13 01:16:58.101749 master-0 kubenswrapper[7599]: I0313 01:16:58.097133 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zglhp" event={"ID":"9d2f93bd-e4ce-4ed2-b249-946338f753ed","Type":"ContainerDied","Data":"0e8798fe2e8ef33cc2b91fe39e59f52189be2b65c2d2ed1095f875a54002ee95"} Mar 13 01:16:58.101749 master-0 kubenswrapper[7599]: I0313 01:16:58.097164 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zglhp" event={"ID":"9d2f93bd-e4ce-4ed2-b249-946338f753ed","Type":"ContainerStarted","Data":"d243e098a2bf2092df86880b77adaed46c59e61e072be24c44913d8532c87256"} Mar 13 01:16:58.104765 master-0 kubenswrapper[7599]: I0313 01:16:58.104706 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d9nkp" event={"ID":"6da2aac0-42a0-45c2-93ec-b148f5889e8b","Type":"ContainerStarted","Data":"29a4dac76ec541c293a6ce4e39639fc49f78b2eda67e42e686b9117f339c9648"} Mar 13 01:16:58.174072 master-0 kubenswrapper[7599]: I0313 01:16:58.173981 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" podStartSLOduration=206.966523896 podStartE2EDuration="3m32.173952881s" podCreationTimestamp="2026-03-13 01:13:26 +0000 UTC" firstStartedPulling="2026-03-13 01:16:51.467162447 +0000 UTC m=+270.738841841" lastFinishedPulling="2026-03-13 01:16:56.674591432 +0000 UTC m=+275.946270826" 
observedRunningTime="2026-03-13 01:16:58.16768416 +0000 UTC m=+277.439363614" watchObservedRunningTime="2026-03-13 01:16:58.173952881 +0000 UTC m=+277.445632305" Mar 13 01:16:58.187700 master-0 kubenswrapper[7599]: I0313 01:16:58.187612 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" podStartSLOduration=207.370031713 podStartE2EDuration="3m32.187587671s" podCreationTimestamp="2026-03-13 01:13:26 +0000 UTC" firstStartedPulling="2026-03-13 01:16:51.851371737 +0000 UTC m=+271.123051131" lastFinishedPulling="2026-03-13 01:16:56.668927695 +0000 UTC m=+275.940607089" observedRunningTime="2026-03-13 01:16:58.185283015 +0000 UTC m=+277.456962419" watchObservedRunningTime="2026-03-13 01:16:58.187587671 +0000 UTC m=+277.459267105" Mar 13 01:16:58.213487 master-0 kubenswrapper[7599]: I0313 01:16:58.213386 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d9nkp" podStartSLOduration=2.513442541 podStartE2EDuration="8.213363233s" podCreationTimestamp="2026-03-13 01:16:50 +0000 UTC" firstStartedPulling="2026-03-13 01:16:51.89785501 +0000 UTC m=+271.169534404" lastFinishedPulling="2026-03-13 01:16:57.597775702 +0000 UTC m=+276.869455096" observedRunningTime="2026-03-13 01:16:58.210430842 +0000 UTC m=+277.482110246" watchObservedRunningTime="2026-03-13 01:16:58.213363233 +0000 UTC m=+277.485042627" Mar 13 01:16:59.001432 master-0 kubenswrapper[7599]: I0313 01:16:59.001320 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" path="/var/lib/kubelet/pods/9992615a-c49b-4ef0-b02b-c6cd2e719fa3/volumes" Mar 13 01:16:59.114335 master-0 kubenswrapper[7599]: I0313 01:16:59.114263 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cx58l" 
event={"ID":"fb5dee36-70a4-47a4-afc2-d3209a476362","Type":"ContainerStarted","Data":"61bf0fbf4501061e78c007eaf05936de96edb76fe74c0218e6d72868ece9ed9a"} Mar 13 01:16:59.121174 master-0 kubenswrapper[7599]: I0313 01:16:59.121110 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-64xrl" event={"ID":"9863f7ff-4c8d-42a3-a822-01697cf9c920","Type":"ContainerStarted","Data":"d049d04af3108771674fb31be72f77a3c473d2a4cf8b78458a3f0030fb06c0dd"} Mar 13 01:16:59.187555 master-0 kubenswrapper[7599]: I0313 01:16:59.183576 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-64xrl" podStartSLOduration=4.125758878 podStartE2EDuration="8.183549129s" podCreationTimestamp="2026-03-13 01:16:51 +0000 UTC" firstStartedPulling="2026-03-13 01:16:54.552502379 +0000 UTC m=+273.824181773" lastFinishedPulling="2026-03-13 01:16:58.61029262 +0000 UTC m=+277.881972024" observedRunningTime="2026-03-13 01:16:59.18109253 +0000 UTC m=+278.452771934" watchObservedRunningTime="2026-03-13 01:16:59.183549129 +0000 UTC m=+278.455228533" Mar 13 01:17:00.818550 master-0 kubenswrapper[7599]: I0313 01:17:00.818308 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:17:00.818550 master-0 kubenswrapper[7599]: I0313 01:17:00.818395 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:17:01.877486 master-0 kubenswrapper[7599]: I0313 01:17:01.877409 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d9nkp" podUID="6da2aac0-42a0-45c2-93ec-b148f5889e8b" containerName="registry-server" probeResult="failure" output=< Mar 13 01:17:01.877486 master-0 kubenswrapper[7599]: timeout: failed to connect service ":50051" within 1s Mar 13 01:17:01.877486 master-0 kubenswrapper[7599]: > Mar 13 
01:17:02.211315 master-0 kubenswrapper[7599]: I0313 01:17:02.211137 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:17:02.211315 master-0 kubenswrapper[7599]: I0313 01:17:02.211201 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:17:02.262140 master-0 kubenswrapper[7599]: I0313 01:17:02.262062 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:17:03.166855 master-0 kubenswrapper[7599]: I0313 01:17:03.166776 7599 generic.go:334] "Generic (PLEG): container finished" podID="fb5dee36-70a4-47a4-afc2-d3209a476362" containerID="61bf0fbf4501061e78c007eaf05936de96edb76fe74c0218e6d72868ece9ed9a" exitCode=0 Mar 13 01:17:03.166855 master-0 kubenswrapper[7599]: I0313 01:17:03.166830 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cx58l" event={"ID":"fb5dee36-70a4-47a4-afc2-d3209a476362","Type":"ContainerDied","Data":"61bf0fbf4501061e78c007eaf05936de96edb76fe74c0218e6d72868ece9ed9a"} Mar 13 01:17:03.222976 master-0 kubenswrapper[7599]: I0313 01:17:03.222933 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:17:04.180594 master-0 kubenswrapper[7599]: I0313 01:17:04.180350 7599 generic.go:334] "Generic (PLEG): container finished" podID="9d2f93bd-e4ce-4ed2-b249-946338f753ed" containerID="85929f4bdc709951d2ed40828c44291860167df639f2be4b11644838c712256b" exitCode=0 Mar 13 01:17:04.180594 master-0 kubenswrapper[7599]: I0313 01:17:04.180417 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zglhp" 
event={"ID":"9d2f93bd-e4ce-4ed2-b249-946338f753ed","Type":"ContainerDied","Data":"85929f4bdc709951d2ed40828c44291860167df639f2be4b11644838c712256b"} Mar 13 01:17:07.203034 master-0 kubenswrapper[7599]: I0313 01:17:07.202962 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" event={"ID":"65dd1dc7-1b90-40f6-82c9-dee90a1fa852","Type":"ContainerStarted","Data":"467d27ef8c4875900554f10b1b546a29313efa62bd2829ca598e7a0fd64c5e96"} Mar 13 01:17:07.208408 master-0 kubenswrapper[7599]: I0313 01:17:07.208162 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zglhp" event={"ID":"9d2f93bd-e4ce-4ed2-b249-946338f753ed","Type":"ContainerStarted","Data":"de27b758e217cf6381494010f1e56c943113961e6d8ad244cbf79b62a60f0d2a"} Mar 13 01:17:07.211482 master-0 kubenswrapper[7599]: I0313 01:17:07.211426 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cx58l" event={"ID":"fb5dee36-70a4-47a4-afc2-d3209a476362","Type":"ContainerStarted","Data":"a43d25b9f853353102098349bf9bc6f69e72e7bd55069b4ba825e5aa54b9d322"} Mar 13 01:17:07.237942 master-0 kubenswrapper[7599]: I0313 01:17:07.237834 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" podStartSLOduration=208.245580264 podStartE2EDuration="3m42.2378084s" podCreationTimestamp="2026-03-13 01:13:25 +0000 UTC" firstStartedPulling="2026-03-13 01:16:52.902096619 +0000 UTC m=+272.173776013" lastFinishedPulling="2026-03-13 01:17:06.894324745 +0000 UTC m=+286.166004149" observedRunningTime="2026-03-13 01:17:07.236387075 +0000 UTC m=+286.508066469" watchObservedRunningTime="2026-03-13 01:17:07.2378084 +0000 UTC m=+286.509487794" Mar 13 01:17:07.253034 master-0 kubenswrapper[7599]: I0313 01:17:07.252983 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-plhx7_b5757329-8692-4719-b3c7-b5df78110fcf/authentication-operator/2.log" Mar 13 01:17:07.261414 master-0 kubenswrapper[7599]: I0313 01:17:07.261344 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cx58l" podStartSLOduration=4.467037436 podStartE2EDuration="13.261325078s" podCreationTimestamp="2026-03-13 01:16:54 +0000 UTC" firstStartedPulling="2026-03-13 01:16:58.082764326 +0000 UTC m=+277.354443720" lastFinishedPulling="2026-03-13 01:17:06.877051968 +0000 UTC m=+286.148731362" observedRunningTime="2026-03-13 01:17:07.259834952 +0000 UTC m=+286.531514346" watchObservedRunningTime="2026-03-13 01:17:07.261325078 +0000 UTC m=+286.533004472" Mar 13 01:17:07.285881 master-0 kubenswrapper[7599]: I0313 01:17:07.285801 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zglhp" podStartSLOduration=5.524543996 podStartE2EDuration="14.285782329s" podCreationTimestamp="2026-03-13 01:16:53 +0000 UTC" firstStartedPulling="2026-03-13 01:16:58.099597213 +0000 UTC m=+277.371276647" lastFinishedPulling="2026-03-13 01:17:06.860835556 +0000 UTC m=+286.132514980" observedRunningTime="2026-03-13 01:17:07.283706999 +0000 UTC m=+286.555386393" watchObservedRunningTime="2026-03-13 01:17:07.285782329 +0000 UTC m=+286.557461723" Mar 13 01:17:07.457238 master-0 kubenswrapper[7599]: I0313 01:17:07.448878 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"] Mar 13 01:17:07.457238 master-0 kubenswrapper[7599]: E0313 01:17:07.449288 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerName="extract-utilities" Mar 13 01:17:07.457238 master-0 kubenswrapper[7599]: I0313 01:17:07.449306 7599 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerName="extract-utilities" Mar 13 01:17:07.457238 master-0 kubenswrapper[7599]: E0313 01:17:07.449332 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerName="extract-content" Mar 13 01:17:07.457238 master-0 kubenswrapper[7599]: I0313 01:17:07.449341 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerName="extract-content" Mar 13 01:17:07.457238 master-0 kubenswrapper[7599]: E0313 01:17:07.449359 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerName="registry-server" Mar 13 01:17:07.457238 master-0 kubenswrapper[7599]: I0313 01:17:07.449369 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerName="registry-server" Mar 13 01:17:07.457238 master-0 kubenswrapper[7599]: I0313 01:17:07.449501 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="9992615a-c49b-4ef0-b02b-c6cd2e719fa3" containerName="registry-server" Mar 13 01:17:07.457238 master-0 kubenswrapper[7599]: I0313 01:17:07.450339 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.457787 master-0 kubenswrapper[7599]: I0313 01:17:07.457491 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 01:17:07.457787 master-0 kubenswrapper[7599]: I0313 01:17:07.457623 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 01:17:07.457787 master-0 kubenswrapper[7599]: I0313 01:17:07.457742 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 01:17:07.457937 master-0 kubenswrapper[7599]: I0313 01:17:07.457904 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 01:17:07.457937 master-0 kubenswrapper[7599]: I0313 01:17:07.457926 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-znq86" Mar 13 01:17:07.458846 master-0 kubenswrapper[7599]: I0313 01:17:07.458324 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 13 01:17:07.470559 master-0 kubenswrapper[7599]: I0313 01:17:07.466586 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-plhx7_b5757329-8692-4719-b3c7-b5df78110fcf/authentication-operator/3.log" Mar 13 01:17:07.470559 master-0 kubenswrapper[7599]: I0313 01:17:07.468167 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn"] Mar 13 01:17:07.470559 master-0 kubenswrapper[7599]: I0313 01:17:07.468439 7599 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" podUID="eec92350-c2e5-4223-82fe-2c3f78c7945f" containerName="kube-rbac-proxy" containerID="cri-o://43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9" gracePeriod=30 Mar 13 01:17:07.470559 master-0 kubenswrapper[7599]: I0313 01:17:07.468598 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" podUID="eec92350-c2e5-4223-82fe-2c3f78c7945f" containerName="machine-approver-controller" containerID="cri-o://0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67" gracePeriod=30 Mar 13 01:17:07.477562 master-0 kubenswrapper[7599]: I0313 01:17:07.477142 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"] Mar 13 01:17:07.527420 master-0 kubenswrapper[7599]: I0313 01:17:07.527382 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj"] Mar 13 01:17:07.531545 master-0 kubenswrapper[7599]: I0313 01:17:07.529566 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.535984 master-0 kubenswrapper[7599]: I0313 01:17:07.535923 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 01:17:07.537438 master-0 kubenswrapper[7599]: I0313 01:17:07.536150 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 01:17:07.537438 master-0 kubenswrapper[7599]: I0313 01:17:07.536362 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-hzxsb" Mar 13 01:17:07.537438 master-0 kubenswrapper[7599]: I0313 01:17:07.536624 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 01:17:07.537438 master-0 kubenswrapper[7599]: I0313 01:17:07.536750 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 01:17:07.537438 master-0 kubenswrapper[7599]: I0313 01:17:07.536883 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 01:17:07.537628 master-0 kubenswrapper[7599]: I0313 01:17:07.537598 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.537950 master-0 kubenswrapper[7599]: I0313 01:17:07.537657 7599 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sd26j\" (UniqueName: \"kubernetes.io/projected/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-kube-api-access-sd26j\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.537950 master-0 kubenswrapper[7599]: I0313 01:17:07.537757 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.537950 master-0 kubenswrapper[7599]: I0313 01:17:07.537791 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.540764 master-0 kubenswrapper[7599]: I0313 01:17:07.538641 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb"] Mar 13 01:17:07.543948 master-0 kubenswrapper[7599]: I0313 01:17:07.543909 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.544649 master-0 kubenswrapper[7599]: I0313 01:17:07.544591 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"] Mar 13 01:17:07.547016 master-0 kubenswrapper[7599]: I0313 01:17:07.545813 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.547016 master-0 kubenswrapper[7599]: I0313 01:17:07.546359 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Mar 13 01:17:07.547016 master-0 kubenswrapper[7599]: I0313 01:17:07.546569 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-zlp9s" Mar 13 01:17:07.547016 master-0 kubenswrapper[7599]: I0313 01:17:07.546779 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 01:17:07.551577 master-0 kubenswrapper[7599]: I0313 01:17:07.548772 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 13 01:17:07.551577 master-0 kubenswrapper[7599]: I0313 01:17:07.548999 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Mar 13 01:17:07.559352 master-0 kubenswrapper[7599]: I0313 01:17:07.559244 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-jgxk7" Mar 13 01:17:07.603227 master-0 kubenswrapper[7599]: I0313 01:17:07.597303 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb"] Mar 13 01:17:07.607227 master-0 kubenswrapper[7599]: I0313 
01:17:07.604624 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"] Mar 13 01:17:07.639244 master-0 kubenswrapper[7599]: I0313 01:17:07.639184 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.639244 master-0 kubenswrapper[7599]: I0313 01:17:07.639238 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14464536-4f17-4d6f-8867-d68e84bf1b4d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.639479 master-0 kubenswrapper[7599]: I0313 01:17:07.639268 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.639479 master-0 kubenswrapper[7599]: I0313 01:17:07.639293 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-machine-api-operator-tls\") pod 
\"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.639479 master-0 kubenswrapper[7599]: I0313 01:17:07.639328 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.639479 master-0 kubenswrapper[7599]: I0313 01:17:07.639360 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4rhp\" (UniqueName: \"kubernetes.io/projected/14464536-4f17-4d6f-8867-d68e84bf1b4d-kube-api-access-s4rhp\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.639479 master-0 kubenswrapper[7599]: I0313 01:17:07.639380 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-config\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.639479 master-0 kubenswrapper[7599]: I0313 01:17:07.639400 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-images\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 
01:17:07.639479 master-0 kubenswrapper[7599]: I0313 01:17:07.639428 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd26j\" (UniqueName: \"kubernetes.io/projected/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-kube-api-access-sd26j\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.639479 master-0 kubenswrapper[7599]: I0313 01:17:07.639453 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.639479 master-0 kubenswrapper[7599]: I0313 01:17:07.639472 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14464536-4f17-4d6f-8867-d68e84bf1b4d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.639762 master-0 kubenswrapper[7599]: I0313 01:17:07.639496 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.639762 master-0 kubenswrapper[7599]: I0313 01:17:07.639541 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.639762 master-0 kubenswrapper[7599]: I0313 01:17:07.639569 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca06fac5-6707-4521-88ce-1768fede42c2-tmpfs\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.639762 master-0 kubenswrapper[7599]: I0313 01:17:07.639590 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.639762 master-0 kubenswrapper[7599]: I0313 01:17:07.639615 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4lqgs\" (UniqueName: \"kubernetes.io/projected/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-kube-api-access-4lqgs\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.639762 master-0 kubenswrapper[7599]: I0313 01:17:07.639641 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2pt2w\" (UniqueName: 
\"kubernetes.io/projected/ca06fac5-6707-4521-88ce-1768fede42c2-kube-api-access-2pt2w\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.642646 master-0 kubenswrapper[7599]: I0313 01:17:07.641201 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.645261 master-0 kubenswrapper[7599]: I0313 01:17:07.645216 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.645642 master-0 kubenswrapper[7599]: I0313 01:17:07.645608 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.666408 master-0 kubenswrapper[7599]: I0313 01:17:07.664228 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd26j\" (UniqueName: \"kubernetes.io/projected/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-kube-api-access-sd26j\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" 
Mar 13 01:17:07.677138 master-0 kubenswrapper[7599]: I0313 01:17:07.677071 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:17:07.741330 master-0 kubenswrapper[7599]: I0313 01:17:07.741259 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eec92350-c2e5-4223-82fe-2c3f78c7945f-machine-approver-tls\") pod \"eec92350-c2e5-4223-82fe-2c3f78c7945f\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " Mar 13 01:17:07.741599 master-0 kubenswrapper[7599]: I0313 01:17:07.741376 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-config\") pod \"eec92350-c2e5-4223-82fe-2c3f78c7945f\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " Mar 13 01:17:07.741599 master-0 kubenswrapper[7599]: I0313 01:17:07.741419 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-auth-proxy-config\") pod \"eec92350-c2e5-4223-82fe-2c3f78c7945f\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " Mar 13 01:17:07.741599 master-0 kubenswrapper[7599]: I0313 01:17:07.741505 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9jfk\" (UniqueName: \"kubernetes.io/projected/eec92350-c2e5-4223-82fe-2c3f78c7945f-kube-api-access-f9jfk\") pod \"eec92350-c2e5-4223-82fe-2c3f78c7945f\" (UID: \"eec92350-c2e5-4223-82fe-2c3f78c7945f\") " Mar 13 01:17:07.742891 master-0 kubenswrapper[7599]: I0313 01:17:07.742619 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lqgs\" (UniqueName: \"kubernetes.io/projected/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-kube-api-access-4lqgs\") pod 
\"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.743133 master-0 kubenswrapper[7599]: I0313 01:17:07.743104 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pt2w\" (UniqueName: \"kubernetes.io/projected/ca06fac5-6707-4521-88ce-1768fede42c2-kube-api-access-2pt2w\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.743383 master-0 kubenswrapper[7599]: I0313 01:17:07.743353 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.743498 master-0 kubenswrapper[7599]: I0313 01:17:07.743453 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14464536-4f17-4d6f-8867-d68e84bf1b4d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.743589 master-0 kubenswrapper[7599]: I0313 01:17:07.743500 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " 
pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.743866 master-0 kubenswrapper[7599]: I0313 01:17:07.743601 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.744139 master-0 kubenswrapper[7599]: I0313 01:17:07.743895 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s4rhp\" (UniqueName: \"kubernetes.io/projected/14464536-4f17-4d6f-8867-d68e84bf1b4d-kube-api-access-s4rhp\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.744195 master-0 kubenswrapper[7599]: I0313 01:17:07.744152 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-config\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.744397 master-0 kubenswrapper[7599]: I0313 01:17:07.744365 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-images\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.744690 master-0 kubenswrapper[7599]: I0313 01:17:07.744653 7599 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.744922 master-0 kubenswrapper[7599]: I0313 01:17:07.744888 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14464536-4f17-4d6f-8867-d68e84bf1b4d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.745211 master-0 kubenswrapper[7599]: I0313 01:17:07.745177 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.745479 master-0 kubenswrapper[7599]: I0313 01:17:07.745453 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca06fac5-6707-4521-88ce-1768fede42c2-tmpfs\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.748455 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-auth-proxy-config\") pod 
\"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.749702 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-config\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.750028 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-images\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.752206 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "eec92350-c2e5-4223-82fe-2c3f78c7945f" (UID: "eec92350-c2e5-4223-82fe-2c3f78c7945f"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.756454 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-images\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.757444 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14464536-4f17-4d6f-8867-d68e84bf1b4d-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.757648 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca06fac5-6707-4521-88ce-1768fede42c2-tmpfs\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.759315 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.759930 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.760167 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14464536-4f17-4d6f-8867-d68e84bf1b4d-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.761081 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eec92350-c2e5-4223-82fe-2c3f78c7945f-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "eec92350-c2e5-4223-82fe-2c3f78c7945f" (UID: "eec92350-c2e5-4223-82fe-2c3f78c7945f"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:17:07.767550 master-0 kubenswrapper[7599]: I0313 01:17:07.767091 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-config" (OuterVolumeSpecName: "config") pod "eec92350-c2e5-4223-82fe-2c3f78c7945f" (UID: "eec92350-c2e5-4223-82fe-2c3f78c7945f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:17:07.772535 master-0 kubenswrapper[7599]: I0313 01:17:07.770772 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.778083 master-0 kubenswrapper[7599]: I0313 01:17:07.773484 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lqgs\" (UniqueName: \"kubernetes.io/projected/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-kube-api-access-4lqgs\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.778083 master-0 kubenswrapper[7599]: I0313 01:17:07.776721 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec92350-c2e5-4223-82fe-2c3f78c7945f-kube-api-access-f9jfk" (OuterVolumeSpecName: "kube-api-access-f9jfk") pod "eec92350-c2e5-4223-82fe-2c3f78c7945f" (UID: "eec92350-c2e5-4223-82fe-2c3f78c7945f"). InnerVolumeSpecName "kube-api-access-f9jfk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:17:07.790560 master-0 kubenswrapper[7599]: I0313 01:17:07.787547 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:07.790560 master-0 kubenswrapper[7599]: I0313 01:17:07.787632 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s4rhp\" (UniqueName: \"kubernetes.io/projected/14464536-4f17-4d6f-8867-d68e84bf1b4d-kube-api-access-s4rhp\") pod \"cluster-cloud-controller-manager-operator-559568b945-rt5bj\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.806976 master-0 kubenswrapper[7599]: I0313 01:17:07.806919 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pt2w\" (UniqueName: \"kubernetes.io/projected/ca06fac5-6707-4521-88ce-1768fede42c2-kube-api-access-2pt2w\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:07.852536 master-0 kubenswrapper[7599]: I0313 01:17:07.848104 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-c84d45cdc-rj5st_536a2de1-e13c-47d1-b61d-88e0a5fd2851/fix-audit-permissions/0.log" Mar 13 01:17:07.859695 master-0 kubenswrapper[7599]: I0313 01:17:07.859656 7599 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/eec92350-c2e5-4223-82fe-2c3f78c7945f-machine-approver-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:07.859876 master-0 kubenswrapper[7599]: I0313 01:17:07.859864 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:07.859942 master-0 kubenswrapper[7599]: I0313 01:17:07.859932 7599 reconciler_common.go:293] "Volume detached for volume 
\"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/eec92350-c2e5-4223-82fe-2c3f78c7945f-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:07.859999 master-0 kubenswrapper[7599]: I0313 01:17:07.859990 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9jfk\" (UniqueName: \"kubernetes.io/projected/eec92350-c2e5-4223-82fe-2c3f78c7945f-kube-api-access-f9jfk\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:07.930632 master-0 kubenswrapper[7599]: I0313 01:17:07.930587 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:07.946932 master-0 kubenswrapper[7599]: W0313 01:17:07.946149 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14464536_4f17_4d6f_8867_d68e84bf1b4d.slice/crio-7d7e623e8c0d9e066e1623241bc5f63e9d1f8ed656f8cc7a2cd92ed153ee3235 WatchSource:0}: Error finding container 7d7e623e8c0d9e066e1623241bc5f63e9d1f8ed656f8cc7a2cd92ed153ee3235: Status 404 returned error can't find the container with id 7d7e623e8c0d9e066e1623241bc5f63e9d1f8ed656f8cc7a2cd92ed153ee3235 Mar 13 01:17:07.955266 master-0 kubenswrapper[7599]: I0313 01:17:07.954601 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:07.973657 master-0 kubenswrapper[7599]: I0313 01:17:07.973595 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:08.045168 master-0 kubenswrapper[7599]: I0313 01:17:08.045031 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-c84d45cdc-rj5st_536a2de1-e13c-47d1-b61d-88e0a5fd2851/oauth-apiserver/0.log" Mar 13 01:17:08.233321 master-0 kubenswrapper[7599]: I0313 01:17:08.233259 7599 generic.go:334] "Generic (PLEG): container finished" podID="eec92350-c2e5-4223-82fe-2c3f78c7945f" containerID="0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67" exitCode=0 Mar 13 01:17:08.233832 master-0 kubenswrapper[7599]: I0313 01:17:08.233802 7599 generic.go:334] "Generic (PLEG): container finished" podID="eec92350-c2e5-4223-82fe-2c3f78c7945f" containerID="43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9" exitCode=0 Mar 13 01:17:08.233897 master-0 kubenswrapper[7599]: I0313 01:17:08.233868 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" event={"ID":"eec92350-c2e5-4223-82fe-2c3f78c7945f","Type":"ContainerDied","Data":"0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67"} Mar 13 01:17:08.233952 master-0 kubenswrapper[7599]: I0313 01:17:08.233905 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" event={"ID":"eec92350-c2e5-4223-82fe-2c3f78c7945f","Type":"ContainerDied","Data":"43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9"} Mar 13 01:17:08.233952 master-0 kubenswrapper[7599]: I0313 01:17:08.233919 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" event={"ID":"eec92350-c2e5-4223-82fe-2c3f78c7945f","Type":"ContainerDied","Data":"84da267170c9e91b410d7e9d9438b6c48844d88a0a7765f16ae9587a89797c0b"} Mar 13 01:17:08.233952 master-0 kubenswrapper[7599]: 
I0313 01:17:08.233936 7599 scope.go:117] "RemoveContainer" containerID="0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67" Mar 13 01:17:08.234077 master-0 kubenswrapper[7599]: I0313 01:17:08.234053 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn" Mar 13 01:17:08.255301 master-0 kubenswrapper[7599]: I0313 01:17:08.255254 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9_2581e5b5-8cbb-4fa5-9888-98fb572a6232/kube-rbac-proxy/0.log" Mar 13 01:17:08.256998 master-0 kubenswrapper[7599]: I0313 01:17:08.256964 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" event={"ID":"14464536-4f17-4d6f-8867-d68e84bf1b4d","Type":"ContainerStarted","Data":"7d7e623e8c0d9e066e1623241bc5f63e9d1f8ed656f8cc7a2cd92ed153ee3235"} Mar 13 01:17:08.272642 master-0 kubenswrapper[7599]: I0313 01:17:08.272595 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"] Mar 13 01:17:08.277943 master-0 kubenswrapper[7599]: I0313 01:17:08.277907 7599 scope.go:117] "RemoveContainer" containerID="43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9" Mar 13 01:17:08.292718 master-0 kubenswrapper[7599]: I0313 01:17:08.292672 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn"] Mar 13 01:17:08.293034 master-0 kubenswrapper[7599]: I0313 01:17:08.292997 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cluster-machine-approver/machine-approver-955fcfb87-56dsn"] Mar 13 01:17:08.314529 master-0 kubenswrapper[7599]: I0313 01:17:08.311254 7599 scope.go:117] "RemoveContainer" 
containerID="0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67" Mar 13 01:17:08.317592 master-0 kubenswrapper[7599]: E0313 01:17:08.315032 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67\": container with ID starting with 0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67 not found: ID does not exist" containerID="0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67" Mar 13 01:17:08.317592 master-0 kubenswrapper[7599]: I0313 01:17:08.315067 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67"} err="failed to get container status \"0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67\": rpc error: code = NotFound desc = could not find container \"0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67\": container with ID starting with 0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67 not found: ID does not exist" Mar 13 01:17:08.317592 master-0 kubenswrapper[7599]: I0313 01:17:08.315095 7599 scope.go:117] "RemoveContainer" containerID="43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9" Mar 13 01:17:08.317592 master-0 kubenswrapper[7599]: E0313 01:17:08.315483 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9\": container with ID starting with 43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9 not found: ID does not exist" containerID="43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9" Mar 13 01:17:08.317592 master-0 kubenswrapper[7599]: I0313 01:17:08.315596 7599 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9"} err="failed to get container status \"43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9\": rpc error: code = NotFound desc = could not find container \"43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9\": container with ID starting with 43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9 not found: ID does not exist" Mar 13 01:17:08.317592 master-0 kubenswrapper[7599]: I0313 01:17:08.315648 7599 scope.go:117] "RemoveContainer" containerID="0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67" Mar 13 01:17:08.334243 master-0 kubenswrapper[7599]: I0313 01:17:08.332879 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67"} err="failed to get container status \"0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67\": rpc error: code = NotFound desc = could not find container \"0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67\": container with ID starting with 0d838329b22c4bf3c11260be3bbd06e6e28d65ff9807b2785fa0ab758cdd0c67 not found: ID does not exist" Mar 13 01:17:08.334243 master-0 kubenswrapper[7599]: I0313 01:17:08.332957 7599 scope.go:117] "RemoveContainer" containerID="43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9" Mar 13 01:17:08.338240 master-0 kubenswrapper[7599]: I0313 01:17:08.338177 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c"] Mar 13 01:17:08.338762 master-0 kubenswrapper[7599]: E0313 01:17:08.338714 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec92350-c2e5-4223-82fe-2c3f78c7945f" containerName="machine-approver-controller" Mar 13 01:17:08.338762 master-0 kubenswrapper[7599]: I0313 01:17:08.338749 7599 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="eec92350-c2e5-4223-82fe-2c3f78c7945f" containerName="machine-approver-controller" Mar 13 01:17:08.338884 master-0 kubenswrapper[7599]: E0313 01:17:08.338806 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec92350-c2e5-4223-82fe-2c3f78c7945f" containerName="kube-rbac-proxy" Mar 13 01:17:08.338884 master-0 kubenswrapper[7599]: I0313 01:17:08.338817 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec92350-c2e5-4223-82fe-2c3f78c7945f" containerName="kube-rbac-proxy" Mar 13 01:17:08.341532 master-0 kubenswrapper[7599]: I0313 01:17:08.339013 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec92350-c2e5-4223-82fe-2c3f78c7945f" containerName="machine-approver-controller" Mar 13 01:17:08.341532 master-0 kubenswrapper[7599]: I0313 01:17:08.339039 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec92350-c2e5-4223-82fe-2c3f78c7945f" containerName="kube-rbac-proxy" Mar 13 01:17:08.341532 master-0 kubenswrapper[7599]: I0313 01:17:08.340051 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.344614 master-0 kubenswrapper[7599]: I0313 01:17:08.342213 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9"} err="failed to get container status \"43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9\": rpc error: code = NotFound desc = could not find container \"43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9\": container with ID starting with 43f568a04d47a6d2f92deaf7fa44a3976303ad12f95a01962fe3f0429568c9d9 not found: ID does not exist" Mar 13 01:17:08.345215 master-0 kubenswrapper[7599]: I0313 01:17:08.345178 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Mar 13 01:17:08.345626 master-0 kubenswrapper[7599]: I0313 01:17:08.345531 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Mar 13 01:17:08.345705 master-0 kubenswrapper[7599]: I0313 01:17:08.345653 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-5vwqr" Mar 13 01:17:08.345705 master-0 kubenswrapper[7599]: I0313 01:17:08.345693 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Mar 13 01:17:08.345888 master-0 kubenswrapper[7599]: I0313 01:17:08.345863 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 01:17:08.345952 master-0 kubenswrapper[7599]: I0313 01:17:08.345903 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Mar 13 01:17:08.346777 master-0 kubenswrapper[7599]: 
I0313 01:17:08.346644 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj"] Mar 13 01:17:08.422443 master-0 kubenswrapper[7599]: I0313 01:17:08.422383 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb"] Mar 13 01:17:08.468015 master-0 kubenswrapper[7599]: W0313 01:17:08.467948 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2760a216_fd4b_46d9_a4ec_2d3285ec02bd.slice/crio-d525ab3b1b5859648620b47f3759af91f036909616f6c49b660fe4a797d2c3f0 WatchSource:0}: Error finding container d525ab3b1b5859648620b47f3759af91f036909616f6c49b660fe4a797d2c3f0: Status 404 returned error can't find the container with id d525ab3b1b5859648620b47f3759af91f036909616f6c49b660fe4a797d2c3f0 Mar 13 01:17:08.476479 master-0 kubenswrapper[7599]: I0313 01:17:08.476039 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.476479 master-0 kubenswrapper[7599]: I0313 01:17:08.476144 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvmpk\" (UniqueName: \"kubernetes.io/projected/7e938267-de1f-46f7-bf78-b0b3e810c4fa-kube-api-access-kvmpk\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.476479 master-0 kubenswrapper[7599]: I0313 01:17:08.476216 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7e938267-de1f-46f7-bf78-b0b3e810c4fa-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.476479 master-0 kubenswrapper[7599]: I0313 01:17:08.476311 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.482949 master-0 kubenswrapper[7599]: I0313 01:17:08.482902 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9_2581e5b5-8cbb-4fa5-9888-98fb572a6232/cluster-autoscaler-operator/0.log" Mar 13 01:17:08.515918 master-0 kubenswrapper[7599]: I0313 01:17:08.515427 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"] Mar 13 01:17:08.577491 master-0 kubenswrapper[7599]: I0313 01:17:08.577434 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.577641 master-0 kubenswrapper[7599]: I0313 01:17:08.577540 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvmpk\" (UniqueName: \"kubernetes.io/projected/7e938267-de1f-46f7-bf78-b0b3e810c4fa-kube-api-access-kvmpk\") pod \"machine-approver-754bdc9f9d-cp77c\" 
(UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.577641 master-0 kubenswrapper[7599]: I0313 01:17:08.577587 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7e938267-de1f-46f7-bf78-b0b3e810c4fa-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.578294 master-0 kubenswrapper[7599]: I0313 01:17:08.577639 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.579534 master-0 kubenswrapper[7599]: I0313 01:17:08.579481 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.579926 master-0 kubenswrapper[7599]: I0313 01:17:08.579853 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.585529 master-0 kubenswrapper[7599]: I0313 01:17:08.585456 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7e938267-de1f-46f7-bf78-b0b3e810c4fa-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.601150 master-0 kubenswrapper[7599]: I0313 01:17:08.601110 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvmpk\" (UniqueName: \"kubernetes.io/projected/7e938267-de1f-46f7-bf78-b0b3e810c4fa-kube-api-access-kvmpk\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.685114 master-0 kubenswrapper[7599]: I0313 01:17:08.684653 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:08.994566 master-0 kubenswrapper[7599]: I0313 01:17:08.994488 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eec92350-c2e5-4223-82fe-2c3f78c7945f" path="/var/lib/kubelet/pods/eec92350-c2e5-4223-82fe-2c3f78c7945f/volumes" Mar 13 01:17:09.262869 master-0 kubenswrapper[7599]: I0313 01:17:09.258907 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/0.log" Mar 13 01:17:09.275527 master-0 kubenswrapper[7599]: I0313 01:17:09.275158 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" event={"ID":"7e938267-de1f-46f7-bf78-b0b3e810c4fa","Type":"ContainerStarted","Data":"be6c496962a8987f21c42524b12c5d8025b66ff294e50520947b2cd7bb0af865"} Mar 13 01:17:09.278656 master-0 kubenswrapper[7599]: I0313 01:17:09.278570 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" event={"ID":"2760a216-fd4b-46d9-a4ec-2d3285ec02bd","Type":"ContainerStarted","Data":"b320b7af2ce92ffe6aac3feb13a31db5b622569dea32f2c41567f5c0cd871cc5"} Mar 13 01:17:09.278656 master-0 kubenswrapper[7599]: I0313 01:17:09.278620 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" event={"ID":"2760a216-fd4b-46d9-a4ec-2d3285ec02bd","Type":"ContainerStarted","Data":"d525ab3b1b5859648620b47f3759af91f036909616f6c49b660fe4a797d2c3f0"} Mar 13 01:17:09.283583 master-0 kubenswrapper[7599]: I0313 01:17:09.282130 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" event={"ID":"dbcb4b80-425a-4dd5-93a8-bb462f641ef1","Type":"ContainerStarted","Data":"127844c2ce9c2bed808bec7110cc4fadfd0105bf2f2bd01f32d23e8cff37c917"} Mar 13 01:17:09.283583 master-0 kubenswrapper[7599]: I0313 01:17:09.282171 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" event={"ID":"dbcb4b80-425a-4dd5-93a8-bb462f641ef1","Type":"ContainerStarted","Data":"f17ab172b3fc00e3c3a0f9da9bed1e16efebdb5c429420e3295dc5cc1f9a7534"} Mar 13 01:17:09.283583 master-0 kubenswrapper[7599]: I0313 01:17:09.282185 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" event={"ID":"dbcb4b80-425a-4dd5-93a8-bb462f641ef1","Type":"ContainerStarted","Data":"97073f9eaab3f9a84928efdbbff240af7a669518355dadabf3d81bed9aec4570"} Mar 13 01:17:09.289763 master-0 kubenswrapper[7599]: I0313 01:17:09.289720 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" 
event={"ID":"ca06fac5-6707-4521-88ce-1768fede42c2","Type":"ContainerStarted","Data":"5490c0a85f0d4de99cad6a4c695d8f9af2dc89ca90a18d8046940797bb034faf"} Mar 13 01:17:09.289763 master-0 kubenswrapper[7599]: I0313 01:17:09.289761 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" event={"ID":"ca06fac5-6707-4521-88ce-1768fede42c2","Type":"ContainerStarted","Data":"5fc26918eff78c25b88ab7c1476de02488bb5aaefb35f371b1d5f4a9fb66fe67"} Mar 13 01:17:09.290760 master-0 kubenswrapper[7599]: I0313 01:17:09.290408 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:10.290638 master-0 kubenswrapper[7599]: I0313 01:17:10.290585 7599 patch_prober.go:28] interesting pod/packageserver-7877bc66f6-sf5t2 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.128.0.72:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 01:17:10.291361 master-0 kubenswrapper[7599]: I0313 01:17:10.290652 7599 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" podUID="ca06fac5-6707-4521-88ce-1768fede42c2" containerName="packageserver" probeResult="failure" output="Get \"https://10.128.0.72:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 01:17:10.313719 master-0 kubenswrapper[7599]: I0313 01:17:10.313627 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" podStartSLOduration=3.31359467 podStartE2EDuration="3.31359467s" podCreationTimestamp="2026-03-13 01:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:17:10.308219629 +0000 UTC m=+289.579899063" watchObservedRunningTime="2026-03-13 01:17:10.31359467 +0000 UTC m=+289.585274094" Mar 13 01:17:10.318334 master-0 kubenswrapper[7599]: I0313 01:17:10.318202 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" event={"ID":"7e938267-de1f-46f7-bf78-b0b3e810c4fa","Type":"ContainerStarted","Data":"136b01ddc0961d1e803aeaf74058c8cfad8e474f5f85521eb10514e026fc2210"} Mar 13 01:17:10.318334 master-0 kubenswrapper[7599]: I0313 01:17:10.318282 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" event={"ID":"7e938267-de1f-46f7-bf78-b0b3e810c4fa","Type":"ContainerStarted","Data":"bae7a737a9916bf6e75a9e64bc9870fd746bebbcde61cffd2159ed594dff080d"} Mar 13 01:17:10.357593 master-0 kubenswrapper[7599]: I0313 01:17:10.357117 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/baremetal-kube-rbac-proxy/0.log" Mar 13 01:17:10.380432 master-0 kubenswrapper[7599]: I0313 01:17:10.380351 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" podStartSLOduration=3.380332312 podStartE2EDuration="3.380332312s" podCreationTimestamp="2026-03-13 01:17:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:17:10.379560504 +0000 UTC m=+289.651239918" watchObservedRunningTime="2026-03-13 01:17:10.380332312 +0000 UTC m=+289.652011706" Mar 13 01:17:10.395160 master-0 kubenswrapper[7599]: I0313 01:17:10.395063 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6_56e20b21-ba17-46ae-a740-0e7bd45eae5f/control-plane-machine-set-operator/0.log" Mar 13 01:17:10.424163 master-0 kubenswrapper[7599]: I0313 01:17:10.424091 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-8r87t_77e6cd9e-b6ef-491c-a5c3-60dab81fd752/etcd-operator/2.log" Mar 13 01:17:10.437812 master-0 kubenswrapper[7599]: I0313 01:17:10.437682 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-8r87t_77e6cd9e-b6ef-491c-a5c3-60dab81fd752/etcd-operator/3.log" Mar 13 01:17:10.451184 master-0 kubenswrapper[7599]: I0313 01:17:10.451091 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/setup/0.log" Mar 13 01:17:10.486023 master-0 kubenswrapper[7599]: I0313 01:17:10.485947 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-ensure-env-vars/0.log" Mar 13 01:17:10.505163 master-0 kubenswrapper[7599]: I0313 01:17:10.505049 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-resources-copy/0.log" Mar 13 01:17:10.516233 master-0 kubenswrapper[7599]: I0313 01:17:10.516010 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 01:17:10.560414 master-0 kubenswrapper[7599]: I0313 01:17:10.556946 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 01:17:10.575357 master-0 kubenswrapper[7599]: I0313 01:17:10.571537 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" 
Mar 13 01:17:10.669467 master-0 kubenswrapper[7599]: I0313 01:17:10.669425 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 01:17:10.840263 master-0 kubenswrapper[7599]: I0313 01:17:10.839774 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-readyz/0.log" Mar 13 01:17:10.878487 master-0 kubenswrapper[7599]: I0313 01:17:10.878411 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:17:10.923041 master-0 kubenswrapper[7599]: I0313 01:17:10.922687 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:17:11.040039 master-0 kubenswrapper[7599]: I0313 01:17:11.039970 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 01:17:11.249905 master-0 kubenswrapper[7599]: I0313 01:17:11.249844 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_dfb4407e-71fc-4684-aded-cc84f7e306dc/installer/0.log" Mar 13 01:17:11.347711 master-0 kubenswrapper[7599]: I0313 01:17:11.346076 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" podStartSLOduration=3.34605383 podStartE2EDuration="3.34605383s" podCreationTimestamp="2026-03-13 01:17:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:17:11.344555674 +0000 UTC m=+290.616235078" watchObservedRunningTime="2026-03-13 01:17:11.34605383 +0000 UTC m=+290.617733224" Mar 13 01:17:11.443901 master-0 kubenswrapper[7599]: I0313 01:17:11.443854 7599 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-g8gj5_fde89b0b-7133-4b97-9e35-51c0382bd366/kube-apiserver-operator/0.log" Mar 13 01:17:11.639731 master-0 kubenswrapper[7599]: I0313 01:17:11.639500 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-g8gj5_fde89b0b-7133-4b97-9e35-51c0382bd366/kube-apiserver-operator/1.log" Mar 13 01:17:11.844028 master-0 kubenswrapper[7599]: I0313 01:17:11.843968 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/setup/0.log" Mar 13 01:17:12.043532 master-0 kubenswrapper[7599]: I0313 01:17:12.042572 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver/0.log" Mar 13 01:17:12.245538 master-0 kubenswrapper[7599]: I0313 01:17:12.244443 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_bootstrap-kube-apiserver-master-0_5f77c8e18b751d90bc0dfe2d4e304050/kube-apiserver-insecure-readyz/0.log" Mar 13 01:17:12.448014 master-0 kubenswrapper[7599]: I0313 01:17:12.447890 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_fdcd8438-d33f-490f-a841-8944c58506f8/installer/0.log" Mar 13 01:17:12.681368 master-0 kubenswrapper[7599]: I0313 01:17:12.681312 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_7106c6fe-7c8d-45b9-bc5c-521db743663f/installer/0.log" Mar 13 01:17:12.755869 master-0 kubenswrapper[7599]: I0313 01:17:12.755816 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-fprhw"] Mar 13 01:17:12.756792 master-0 kubenswrapper[7599]: I0313 01:17:12.756759 7599 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.759342 master-0 kubenswrapper[7599]: I0313 01:17:12.759258 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 01:17:12.759468 master-0 kubenswrapper[7599]: I0313 01:17:12.759410 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-24stp" Mar 13 01:17:12.767567 master-0 kubenswrapper[7599]: I0313 01:17:12.766491 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdpt2\" (UniqueName: \"kubernetes.io/projected/3418d0fb-d0ae-4634-a645-dc387a19147f-kube-api-access-tdpt2\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.767567 master-0 kubenswrapper[7599]: I0313 01:17:12.766588 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.767567 master-0 kubenswrapper[7599]: I0313 01:17:12.766629 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3418d0fb-d0ae-4634-a645-dc387a19147f-rootfs\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.767567 master-0 kubenswrapper[7599]: I0313 01:17:12.766664 7599 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.843815 master-0 kubenswrapper[7599]: I0313 01:17:12.843703 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-5dgb8_f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/kube-controller-manager-operator/1.log" Mar 13 01:17:12.869360 master-0 kubenswrapper[7599]: I0313 01:17:12.868317 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdpt2\" (UniqueName: \"kubernetes.io/projected/3418d0fb-d0ae-4634-a645-dc387a19147f-kube-api-access-tdpt2\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.869360 master-0 kubenswrapper[7599]: I0313 01:17:12.868411 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.869360 master-0 kubenswrapper[7599]: I0313 01:17:12.868443 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3418d0fb-d0ae-4634-a645-dc387a19147f-rootfs\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.869360 master-0 kubenswrapper[7599]: I0313 01:17:12.868491 7599 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.869795 master-0 kubenswrapper[7599]: I0313 01:17:12.869537 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3418d0fb-d0ae-4634-a645-dc387a19147f-rootfs\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.869996 master-0 kubenswrapper[7599]: I0313 01:17:12.869946 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.873111 master-0 kubenswrapper[7599]: I0313 01:17:12.872932 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:12.885957 master-0 kubenswrapper[7599]: I0313 01:17:12.885923 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdpt2\" (UniqueName: \"kubernetes.io/projected/3418d0fb-d0ae-4634-a645-dc387a19147f-kube-api-access-tdpt2\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 
01:17:13.041299 master-0 kubenswrapper[7599]: I0313 01:17:13.041260 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-5dgb8_f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/kube-controller-manager-operator/2.log" Mar 13 01:17:13.122062 master-0 kubenswrapper[7599]: I0313 01:17:13.121795 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:13.144531 master-0 kubenswrapper[7599]: W0313 01:17:13.144435 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3418d0fb_d0ae_4634_a645_dc387a19147f.slice/crio-2f532f863189cb40165138dbb4b485ec37ab7ca8ad6591b3d559de34664f9afe WatchSource:0}: Error finding container 2f532f863189cb40165138dbb4b485ec37ab7ca8ad6591b3d559de34664f9afe: Status 404 returned error can't find the container with id 2f532f863189cb40165138dbb4b485ec37ab7ca8ad6591b3d559de34664f9afe Mar 13 01:17:13.248318 master-0 kubenswrapper[7599]: I0313 01:17:13.248268 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/kube-controller-manager/3.log" Mar 13 01:17:13.347756 master-0 kubenswrapper[7599]: I0313 01:17:13.347689 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" event={"ID":"3418d0fb-d0ae-4634-a645-dc387a19147f","Type":"ContainerStarted","Data":"7bae26fbeb039bb89409ea2b07418b33a068c51b808317d7c8ef9c01bf69e60a"} Mar 13 01:17:13.347756 master-0 kubenswrapper[7599]: I0313 01:17:13.347748 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" 
event={"ID":"3418d0fb-d0ae-4634-a645-dc387a19147f","Type":"ContainerStarted","Data":"2f532f863189cb40165138dbb4b485ec37ab7ca8ad6591b3d559de34664f9afe"} Mar 13 01:17:13.352074 master-0 kubenswrapper[7599]: I0313 01:17:13.352012 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" event={"ID":"14464536-4f17-4d6f-8867-d68e84bf1b4d","Type":"ContainerStarted","Data":"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21"} Mar 13 01:17:13.352074 master-0 kubenswrapper[7599]: I0313 01:17:13.352073 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" event={"ID":"14464536-4f17-4d6f-8867-d68e84bf1b4d","Type":"ContainerStarted","Data":"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3"} Mar 13 01:17:13.609894 master-0 kubenswrapper[7599]: I0313 01:17:13.609774 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:13.610389 master-0 kubenswrapper[7599]: I0313 01:17:13.609912 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:13.645551 master-0 kubenswrapper[7599]: I0313 01:17:13.645377 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/kube-controller-manager/4.log" Mar 13 01:17:13.652980 master-0 kubenswrapper[7599]: I0313 01:17:13.652925 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:13.852359 master-0 kubenswrapper[7599]: I0313 01:17:13.852266 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/kube-system_bootstrap-kube-controller-manager-master-0_f78c05e1499b533b83f091333d61f045/cluster-policy-controller/0.log" Mar 13 01:17:14.052819 master-0 kubenswrapper[7599]: I0313 01:17:14.052666 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/0.log" Mar 13 01:17:14.240795 master-0 kubenswrapper[7599]: I0313 01:17:14.240741 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/kube-system_bootstrap-kube-scheduler-master-0_a1a56802af72ce1aac6b5077f1695ac0/kube-scheduler/1.log" Mar 13 01:17:14.359745 master-0 kubenswrapper[7599]: I0313 01:17:14.359686 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" event={"ID":"3418d0fb-d0ae-4634-a645-dc387a19147f","Type":"ContainerStarted","Data":"4c0a8af138907afa44de24da6374eeef1e04e2a9860cc8a5ce2206b7014dea3d"} Mar 13 01:17:14.363964 master-0 kubenswrapper[7599]: I0313 01:17:14.363876 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" event={"ID":"14464536-4f17-4d6f-8867-d68e84bf1b4d","Type":"ContainerStarted","Data":"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe"} Mar 13 01:17:14.364170 master-0 kubenswrapper[7599]: I0313 01:17:14.364014 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="cluster-cloud-controller-manager" containerID="cri-o://c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3" gracePeriod=30 Mar 13 01:17:14.364319 master-0 kubenswrapper[7599]: I0313 01:17:14.364284 7599 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="kube-rbac-proxy" containerID="cri-o://cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe" gracePeriod=30 Mar 13 01:17:14.364403 master-0 kubenswrapper[7599]: I0313 01:17:14.364349 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="config-sync-controllers" containerID="cri-o://6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21" gracePeriod=30 Mar 13 01:17:14.417120 master-0 kubenswrapper[7599]: I0313 01:17:14.417068 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:14.904465 master-0 kubenswrapper[7599]: I0313 01:17:14.897690 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90/installer/0.log" Mar 13 01:17:14.994942 master-0 kubenswrapper[7599]: E0313 01:17:14.994880 7599 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14464536_4f17_4d6f_8867_d68e84bf1b4d.slice/crio-conmon-6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14464536_4f17_4d6f_8867_d68e84bf1b4d.slice/crio-conmon-c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3.scope\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod14464536_4f17_4d6f_8867_d68e84bf1b4d.slice/crio-conmon-cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe.scope\": RecentStats: unable to find data in memory cache]" Mar 13 01:17:15.013926 master-0 kubenswrapper[7599]: I0313 01:17:15.013865 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:15.013926 master-0 kubenswrapper[7599]: I0313 01:17:15.013932 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:15.059357 master-0 kubenswrapper[7599]: I0313 01:17:15.059276 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:15.199659 master-0 kubenswrapper[7599]: I0313 01:17:15.199543 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" podStartSLOduration=3.19949716 podStartE2EDuration="3.19949716s" podCreationTimestamp="2026-03-13 01:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:17:15.196294883 +0000 UTC m=+294.467974277" watchObservedRunningTime="2026-03-13 01:17:15.19949716 +0000 UTC m=+294.471176554" Mar 13 01:17:15.202503 master-0 kubenswrapper[7599]: I0313 01:17:15.202477 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-8fkz8_c6db75e5-efd1-4bfa-9941-0934d7621ba2/kube-scheduler-operator-container/1.log" Mar 13 01:17:15.212956 master-0 kubenswrapper[7599]: I0313 01:17:15.212928 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-8fkz8_c6db75e5-efd1-4bfa-9941-0934d7621ba2/kube-scheduler-operator-container/2.log" Mar 13 01:17:15.227275 master-0 kubenswrapper[7599]: I0313 01:17:15.227204 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-6bvjn_23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/openshift-apiserver-operator/1.log" Mar 13 01:17:15.248173 master-0 kubenswrapper[7599]: I0313 01:17:15.248106 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver-operator_openshift-apiserver-operator-799b6db4d7-6bvjn_23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/openshift-apiserver-operator/2.log" Mar 13 01:17:15.254961 master-0 kubenswrapper[7599]: I0313 01:17:15.254867 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" podStartSLOduration=3.606530371 podStartE2EDuration="8.254848929s" podCreationTimestamp="2026-03-13 01:17:07 +0000 UTC" firstStartedPulling="2026-03-13 01:17:07.948115721 +0000 UTC m=+287.219795115" lastFinishedPulling="2026-03-13 01:17:12.596434279 +0000 UTC m=+291.868113673" observedRunningTime="2026-03-13 01:17:15.25239977 +0000 UTC m=+294.524079154" watchObservedRunningTime="2026-03-13 01:17:15.254848929 +0000 UTC m=+294.526528323" Mar 13 01:17:15.262952 master-0 kubenswrapper[7599]: I0313 01:17:15.262921 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:15.372379 master-0 kubenswrapper[7599]: I0313 01:17:15.372315 7599 generic.go:334] "Generic (PLEG): container finished" podID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerID="cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe" exitCode=0 Mar 13 01:17:15.372379 master-0 kubenswrapper[7599]: I0313 01:17:15.372355 7599 generic.go:334] "Generic (PLEG): container finished" podID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerID="6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21" exitCode=0 Mar 13 01:17:15.372379 master-0 kubenswrapper[7599]: I0313 01:17:15.372364 7599 generic.go:334] "Generic (PLEG): container finished" podID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerID="c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3" exitCode=0 Mar 13 01:17:15.372379 master-0 kubenswrapper[7599]: I0313 01:17:15.372364 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" event={"ID":"14464536-4f17-4d6f-8867-d68e84bf1b4d","Type":"ContainerDied","Data":"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe"} Mar 13 01:17:15.372882 master-0 kubenswrapper[7599]: I0313 01:17:15.372420 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" event={"ID":"14464536-4f17-4d6f-8867-d68e84bf1b4d","Type":"ContainerDied","Data":"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21"} Mar 13 01:17:15.372882 master-0 kubenswrapper[7599]: I0313 01:17:15.372432 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" 
event={"ID":"14464536-4f17-4d6f-8867-d68e84bf1b4d","Type":"ContainerDied","Data":"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3"} Mar 13 01:17:15.372882 master-0 kubenswrapper[7599]: I0313 01:17:15.372444 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" event={"ID":"14464536-4f17-4d6f-8867-d68e84bf1b4d","Type":"ContainerDied","Data":"7d7e623e8c0d9e066e1623241bc5f63e9d1f8ed656f8cc7a2cd92ed153ee3235"} Mar 13 01:17:15.372882 master-0 kubenswrapper[7599]: I0313 01:17:15.372428 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj" Mar 13 01:17:15.372882 master-0 kubenswrapper[7599]: I0313 01:17:15.372481 7599 scope.go:117] "RemoveContainer" containerID="cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe" Mar 13 01:17:15.390374 master-0 kubenswrapper[7599]: I0313 01:17:15.390266 7599 scope.go:117] "RemoveContainer" containerID="6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21" Mar 13 01:17:15.412698 master-0 kubenswrapper[7599]: I0313 01:17:15.412590 7599 scope.go:117] "RemoveContainer" containerID="c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3" Mar 13 01:17:15.414081 master-0 kubenswrapper[7599]: I0313 01:17:15.413065 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-auth-proxy-config\") pod \"14464536-4f17-4d6f-8867-d68e84bf1b4d\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " Mar 13 01:17:15.414081 master-0 kubenswrapper[7599]: I0313 01:17:15.413183 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-etc-kube\" (UniqueName: 
\"kubernetes.io/host-path/14464536-4f17-4d6f-8867-d68e84bf1b4d-host-etc-kube\") pod \"14464536-4f17-4d6f-8867-d68e84bf1b4d\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " Mar 13 01:17:15.414081 master-0 kubenswrapper[7599]: I0313 01:17:15.413223 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4rhp\" (UniqueName: \"kubernetes.io/projected/14464536-4f17-4d6f-8867-d68e84bf1b4d-kube-api-access-s4rhp\") pod \"14464536-4f17-4d6f-8867-d68e84bf1b4d\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " Mar 13 01:17:15.414081 master-0 kubenswrapper[7599]: I0313 01:17:15.413277 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-images\") pod \"14464536-4f17-4d6f-8867-d68e84bf1b4d\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " Mar 13 01:17:15.414081 master-0 kubenswrapper[7599]: I0313 01:17:15.413300 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14464536-4f17-4d6f-8867-d68e84bf1b4d-cloud-controller-manager-operator-tls\") pod \"14464536-4f17-4d6f-8867-d68e84bf1b4d\" (UID: \"14464536-4f17-4d6f-8867-d68e84bf1b4d\") " Mar 13 01:17:15.414081 master-0 kubenswrapper[7599]: I0313 01:17:15.413815 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/14464536-4f17-4d6f-8867-d68e84bf1b4d-host-etc-kube" (OuterVolumeSpecName: "host-etc-kube") pod "14464536-4f17-4d6f-8867-d68e84bf1b4d" (UID: "14464536-4f17-4d6f-8867-d68e84bf1b4d"). InnerVolumeSpecName "host-etc-kube". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:15.415386 master-0 kubenswrapper[7599]: I0313 01:17:15.415353 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-images" (OuterVolumeSpecName: "images") pod "14464536-4f17-4d6f-8867-d68e84bf1b4d" (UID: "14464536-4f17-4d6f-8867-d68e84bf1b4d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:17:15.416079 master-0 kubenswrapper[7599]: I0313 01:17:15.416049 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "14464536-4f17-4d6f-8867-d68e84bf1b4d" (UID: "14464536-4f17-4d6f-8867-d68e84bf1b4d"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:17:15.421381 master-0 kubenswrapper[7599]: I0313 01:17:15.421045 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14464536-4f17-4d6f-8867-d68e84bf1b4d-cloud-controller-manager-operator-tls" (OuterVolumeSpecName: "cloud-controller-manager-operator-tls") pod "14464536-4f17-4d6f-8867-d68e84bf1b4d" (UID: "14464536-4f17-4d6f-8867-d68e84bf1b4d"). InnerVolumeSpecName "cloud-controller-manager-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:17:15.421381 master-0 kubenswrapper[7599]: I0313 01:17:15.421244 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14464536-4f17-4d6f-8867-d68e84bf1b4d-kube-api-access-s4rhp" (OuterVolumeSpecName: "kube-api-access-s4rhp") pod "14464536-4f17-4d6f-8867-d68e84bf1b4d" (UID: "14464536-4f17-4d6f-8867-d68e84bf1b4d"). InnerVolumeSpecName "kube-api-access-s4rhp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:17:15.429377 master-0 kubenswrapper[7599]: I0313 01:17:15.429330 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:15.439399 master-0 kubenswrapper[7599]: I0313 01:17:15.439345 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-7dbfb86fbb-mc7xz_be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/fix-audit-permissions/0.log" Mar 13 01:17:15.445328 master-0 kubenswrapper[7599]: I0313 01:17:15.445292 7599 scope.go:117] "RemoveContainer" containerID="cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe" Mar 13 01:17:15.446879 master-0 kubenswrapper[7599]: E0313 01:17:15.446820 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe\": container with ID starting with cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe not found: ID does not exist" containerID="cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe" Mar 13 01:17:15.446969 master-0 kubenswrapper[7599]: I0313 01:17:15.446898 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe"} err="failed to get container status \"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe\": rpc error: code = NotFound desc = could not find container \"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe\": container with ID starting with cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe not found: ID does not exist" Mar 13 01:17:15.446969 master-0 kubenswrapper[7599]: I0313 01:17:15.446935 7599 scope.go:117] "RemoveContainer" containerID="6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21" Mar 13 01:17:15.447799 
master-0 kubenswrapper[7599]: E0313 01:17:15.447759 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21\": container with ID starting with 6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21 not found: ID does not exist" containerID="6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21" Mar 13 01:17:15.447870 master-0 kubenswrapper[7599]: I0313 01:17:15.447821 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21"} err="failed to get container status \"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21\": rpc error: code = NotFound desc = could not find container \"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21\": container with ID starting with 6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21 not found: ID does not exist" Mar 13 01:17:15.447870 master-0 kubenswrapper[7599]: I0313 01:17:15.447864 7599 scope.go:117] "RemoveContainer" containerID="c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3" Mar 13 01:17:15.448469 master-0 kubenswrapper[7599]: E0313 01:17:15.448429 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3\": container with ID starting with c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3 not found: ID does not exist" containerID="c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3" Mar 13 01:17:15.448469 master-0 kubenswrapper[7599]: I0313 01:17:15.448459 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3"} err="failed to get container 
status \"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3\": rpc error: code = NotFound desc = could not find container \"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3\": container with ID starting with c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3 not found: ID does not exist" Mar 13 01:17:15.448608 master-0 kubenswrapper[7599]: I0313 01:17:15.448478 7599 scope.go:117] "RemoveContainer" containerID="cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe" Mar 13 01:17:15.449288 master-0 kubenswrapper[7599]: I0313 01:17:15.449218 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe"} err="failed to get container status \"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe\": rpc error: code = NotFound desc = could not find container \"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe\": container with ID starting with cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe not found: ID does not exist" Mar 13 01:17:15.449288 master-0 kubenswrapper[7599]: I0313 01:17:15.449265 7599 scope.go:117] "RemoveContainer" containerID="6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21" Mar 13 01:17:15.449799 master-0 kubenswrapper[7599]: I0313 01:17:15.449753 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21"} err="failed to get container status \"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21\": rpc error: code = NotFound desc = could not find container \"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21\": container with ID starting with 6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21 not found: ID does not exist" Mar 13 01:17:15.449869 master-0 kubenswrapper[7599]: I0313 01:17:15.449802 
7599 scope.go:117] "RemoveContainer" containerID="c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3" Mar 13 01:17:15.450228 master-0 kubenswrapper[7599]: I0313 01:17:15.450197 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3"} err="failed to get container status \"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3\": rpc error: code = NotFound desc = could not find container \"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3\": container with ID starting with c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3 not found: ID does not exist" Mar 13 01:17:15.450295 master-0 kubenswrapper[7599]: I0313 01:17:15.450226 7599 scope.go:117] "RemoveContainer" containerID="cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe" Mar 13 01:17:15.450687 master-0 kubenswrapper[7599]: I0313 01:17:15.450637 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe"} err="failed to get container status \"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe\": rpc error: code = NotFound desc = could not find container \"cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe\": container with ID starting with cb322f4f9b35c8b079c2c7876309abe821e6fd78b6b4090c2b95549a31581bbe not found: ID does not exist" Mar 13 01:17:15.450829 master-0 kubenswrapper[7599]: I0313 01:17:15.450691 7599 scope.go:117] "RemoveContainer" containerID="6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21" Mar 13 01:17:15.451222 master-0 kubenswrapper[7599]: I0313 01:17:15.451191 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21"} err="failed to get container status 
\"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21\": rpc error: code = NotFound desc = could not find container \"6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21\": container with ID starting with 6629bf9ec431d14d04a7e997715e6a3dc782c773f9be18e251f4f1ec6ee32a21 not found: ID does not exist" Mar 13 01:17:15.451267 master-0 kubenswrapper[7599]: I0313 01:17:15.451217 7599 scope.go:117] "RemoveContainer" containerID="c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3" Mar 13 01:17:15.451733 master-0 kubenswrapper[7599]: I0313 01:17:15.451702 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3"} err="failed to get container status \"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3\": rpc error: code = NotFound desc = could not find container \"c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3\": container with ID starting with c89798199be8b1f79eed68b14a6a3d5e155d24ef9ca3b65d837aec80da1dc3c3 not found: ID does not exist" Mar 13 01:17:15.515963 master-0 kubenswrapper[7599]: I0313 01:17:15.515663 7599 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-auth-proxy-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:15.515963 master-0 kubenswrapper[7599]: I0313 01:17:15.515692 7599 reconciler_common.go:293] "Volume detached for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/14464536-4f17-4d6f-8867-d68e84bf1b4d-host-etc-kube\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:15.515963 master-0 kubenswrapper[7599]: I0313 01:17:15.515705 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4rhp\" (UniqueName: \"kubernetes.io/projected/14464536-4f17-4d6f-8867-d68e84bf1b4d-kube-api-access-s4rhp\") on node \"master-0\" DevicePath \"\"" Mar 
13 01:17:15.515963 master-0 kubenswrapper[7599]: I0313 01:17:15.515716 7599 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/14464536-4f17-4d6f-8867-d68e84bf1b4d-images\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:15.515963 master-0 kubenswrapper[7599]: I0313 01:17:15.515726 7599 reconciler_common.go:293] "Volume detached for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/14464536-4f17-4d6f-8867-d68e84bf1b4d-cloud-controller-manager-operator-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:15.647826 master-0 kubenswrapper[7599]: I0313 01:17:15.647773 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-7dbfb86fbb-mc7xz_be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/openshift-apiserver/0.log" Mar 13 01:17:15.730635 master-0 kubenswrapper[7599]: I0313 01:17:15.729882 7599 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj"] Mar 13 01:17:15.744668 master-0 kubenswrapper[7599]: I0313 01:17:15.742418 7599 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-559568b945-rt5bj"] Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: I0313 01:17:15.756217 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"] Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: E0313 01:17:15.756446 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="config-sync-controllers" Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: I0313 01:17:15.756459 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="config-sync-controllers" 
Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: E0313 01:17:15.756478 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="kube-rbac-proxy" Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: I0313 01:17:15.756484 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="kube-rbac-proxy" Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: E0313 01:17:15.756494 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="cluster-cloud-controller-manager" Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: I0313 01:17:15.756499 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="cluster-cloud-controller-manager" Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: I0313 01:17:15.756599 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="config-sync-controllers" Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: I0313 01:17:15.756616 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="kube-rbac-proxy" Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: I0313 01:17:15.756630 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" containerName="cluster-cloud-controller-manager" Mar 13 01:17:15.757862 master-0 kubenswrapper[7599]: I0313 01:17:15.757359 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:15.759976 master-0 kubenswrapper[7599]: I0313 01:17:15.759917 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-hzxsb" Mar 13 01:17:15.760190 master-0 kubenswrapper[7599]: I0313 01:17:15.760171 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 01:17:15.760634 master-0 kubenswrapper[7599]: I0313 01:17:15.760609 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 01:17:15.760718 master-0 kubenswrapper[7599]: I0313 01:17:15.760689 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 01:17:15.760829 master-0 kubenswrapper[7599]: I0313 01:17:15.760770 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 01:17:15.761365 master-0 kubenswrapper[7599]: I0313 01:17:15.761338 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 01:17:15.845540 master-0 kubenswrapper[7599]: I0313 01:17:15.844375 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-apiserver_apiserver-7dbfb86fbb-mc7xz_be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/openshift-apiserver-check-endpoints/0.log" Mar 13 01:17:15.926952 master-0 kubenswrapper[7599]: I0313 01:17:15.926896 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: 
\"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:15.926952 master-0 kubenswrapper[7599]: I0313 01:17:15.926957 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:15.927487 master-0 kubenswrapper[7599]: I0313 01:17:15.927030 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:15.927487 master-0 kubenswrapper[7599]: I0313 01:17:15.927085 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wld\" (UniqueName: \"kubernetes.io/projected/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-kube-api-access-t7wld\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:15.927487 master-0 kubenswrapper[7599]: I0313 01:17:15.927118 7599 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.028987 master-0 kubenswrapper[7599]: I0313 01:17:16.028855 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.028987 master-0 kubenswrapper[7599]: I0313 01:17:16.028959 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.028987 master-0 kubenswrapper[7599]: I0313 01:17:16.028992 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wld\" (UniqueName: \"kubernetes.io/projected/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-kube-api-access-t7wld\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 
01:17:16.029558 master-0 kubenswrapper[7599]: I0313 01:17:16.029019 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.029558 master-0 kubenswrapper[7599]: I0313 01:17:16.029066 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.029823 master-0 kubenswrapper[7599]: I0313 01:17:16.029761 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.030184 master-0 kubenswrapper[7599]: I0313 01:17:16.030153 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.030641 master-0 kubenswrapper[7599]: I0313 
01:17:16.030608 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.033305 master-0 kubenswrapper[7599]: I0313 01:17:16.033269 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.042529 master-0 kubenswrapper[7599]: I0313 01:17:16.042233 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-8r87t_77e6cd9e-b6ef-491c-a5c3-60dab81fd752/etcd-operator/2.log" Mar 13 01:17:16.059168 master-0 kubenswrapper[7599]: I0313 01:17:16.059116 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wld\" (UniqueName: \"kubernetes.io/projected/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-kube-api-access-t7wld\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.082069 master-0 kubenswrapper[7599]: I0313 01:17:16.081997 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:16.240102 master-0 kubenswrapper[7599]: I0313 01:17:16.240042 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-8r87t_77e6cd9e-b6ef-491c-a5c3-60dab81fd752/etcd-operator/3.log" Mar 13 01:17:16.439266 master-0 kubenswrapper[7599]: I0313 01:17:16.439120 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-7rhdg_74efa52b-fd97-418a-9a44-914442633f74/openshift-controller-manager-operator/2.log" Mar 13 01:17:16.641007 master-0 kubenswrapper[7599]: I0313 01:17:16.640942 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-7rhdg_74efa52b-fd97-418a-9a44-914442633f74/openshift-controller-manager-operator/3.log" Mar 13 01:17:16.847573 master-0 kubenswrapper[7599]: I0313 01:17:16.847462 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager_controller-manager-7f46d696f9-s9d6s_d477d4b0-8b36-4ff9-9b56-0e67709b1aa7/controller-manager/0.log" Mar 13 01:17:17.004637 master-0 kubenswrapper[7599]: I0313 01:17:17.004543 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="14464536-4f17-4d6f-8867-d68e84bf1b4d" path="/var/lib/kubelet/pods/14464536-4f17-4d6f-8867-d68e84bf1b4d/volumes" Mar 13 01:17:17.047170 master-0 kubenswrapper[7599]: I0313 01:17:17.047101 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-route-controller-manager_route-controller-manager-6cc78fd984-g55t4_581ff17d-f121-4ece-8e45-81f1f710d163/route-controller-manager/0.log" Mar 13 01:17:17.241655 master-0 kubenswrapper[7599]: I0313 01:17:17.241583 7599 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operator-lifecycle-manager_catalog-operator-7d9c49f57b-4jttq_6ad2904e-ece9-4d72-8683-c3e691e07497/catalog-operator/0.log" Mar 13 01:17:17.446007 master-0 kubenswrapper[7599]: I0313 01:17:17.445940 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_olm-operator-d64cfc9db-r4gzg_31f19d97-50f9-4486-a8f9-df61ef2b0528/olm-operator/0.log" Mar 13 01:17:17.647777 master-0 kubenswrapper[7599]: I0313 01:17:17.647434 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-pj26h_53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/kube-rbac-proxy/0.log" Mar 13 01:17:17.854548 master-0 kubenswrapper[7599]: I0313 01:17:17.854460 7599 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-pj26h_53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/package-server-manager/0.log" Mar 13 01:17:17.990815 master-0 kubenswrapper[7599]: I0313 01:17:17.990722 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp"] Mar 13 01:17:17.996703 master-0 kubenswrapper[7599]: I0313 01:17:17.996649 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:18.001033 master-0 kubenswrapper[7599]: I0313 01:17:18.000981 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 01:17:18.001205 master-0 kubenswrapper[7599]: I0313 01:17:18.001174 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-mmsdc" Mar 13 01:17:18.013080 master-0 kubenswrapper[7599]: I0313 01:17:18.012996 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp"] Mar 13 01:17:18.167265 master-0 kubenswrapper[7599]: I0313 01:17:18.163096 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g8n5d\" (UniqueName: \"kubernetes.io/projected/c55a215a-9a95-4f48-8668-9b76503c3044-kube-api-access-g8n5d\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:18.167265 master-0 kubenswrapper[7599]: I0313 01:17:18.163186 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:18.167265 master-0 kubenswrapper[7599]: I0313 01:17:18.163307 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls\") pod 
\"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:18.267932 master-0 kubenswrapper[7599]: I0313 01:17:18.266908 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8n5d\" (UniqueName: \"kubernetes.io/projected/c55a215a-9a95-4f48-8668-9b76503c3044-kube-api-access-g8n5d\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:18.267932 master-0 kubenswrapper[7599]: I0313 01:17:18.267479 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:18.267932 master-0 kubenswrapper[7599]: I0313 01:17:18.267765 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:18.269084 master-0 kubenswrapper[7599]: I0313 01:17:18.268963 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " 
pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:18.273410 master-0 kubenswrapper[7599]: I0313 01:17:18.273362 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:18.296139 master-0 kubenswrapper[7599]: I0313 01:17:18.295491 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8n5d\" (UniqueName: \"kubernetes.io/projected/c55a215a-9a95-4f48-8668-9b76503c3044-kube-api-access-g8n5d\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:18.314654 master-0 kubenswrapper[7599]: I0313 01:17:18.314583 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:19.446681 master-0 kubenswrapper[7599]: W0313 01:17:19.446586 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80eb89dc_ccfc_4360_811a_82a3ef6f7b65.slice/crio-28719caedf8b1f4ed31a1dd696057fe3b52449ba6c0d76bcf9bc027a93b14830 WatchSource:0}: Error finding container 28719caedf8b1f4ed31a1dd696057fe3b52449ba6c0d76bcf9bc027a93b14830: Status 404 returned error can't find the container with id 28719caedf8b1f4ed31a1dd696057fe3b52449ba6c0d76bcf9bc027a93b14830 Mar 13 01:17:19.876464 master-0 kubenswrapper[7599]: I0313 01:17:19.876399 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp"] Mar 13 01:17:19.883894 master-0 kubenswrapper[7599]: W0313 01:17:19.883798 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc55a215a_9a95_4f48_8668_9b76503c3044.slice/crio-33bf722525c772142ec0cd09e0392bf59b78686977bb452929548b6bc04bfae5 WatchSource:0}: Error finding container 33bf722525c772142ec0cd09e0392bf59b78686977bb452929548b6bc04bfae5: Status 404 returned error can't find the container with id 33bf722525c772142ec0cd09e0392bf59b78686977bb452929548b6bc04bfae5 Mar 13 01:17:20.211455 master-0 kubenswrapper[7599]: I0313 01:17:20.209492 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-xd626"] Mar 13 01:17:20.211455 master-0 kubenswrapper[7599]: I0313 01:17:20.210736 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-xd626" Mar 13 01:17:20.219533 master-0 kubenswrapper[7599]: I0313 01:17:20.219445 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l"] Mar 13 01:17:20.223330 master-0 kubenswrapper[7599]: I0313 01:17:20.219988 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-79f8cd6fdd-kzq6q"] Mar 13 01:17:20.223330 master-0 kubenswrapper[7599]: I0313 01:17:20.220240 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" Mar 13 01:17:20.223330 master-0 kubenswrapper[7599]: I0313 01:17:20.220525 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.223330 master-0 kubenswrapper[7599]: I0313 01:17:20.221892 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 13 01:17:20.223330 master-0 kubenswrapper[7599]: I0313 01:17:20.222191 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 01:17:20.226970 master-0 kubenswrapper[7599]: I0313 01:17:20.226885 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 01:17:20.230503 master-0 kubenswrapper[7599]: I0313 01:17:20.227112 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 01:17:20.230503 master-0 kubenswrapper[7599]: I0313 01:17:20.227288 7599 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 01:17:20.230503 master-0 kubenswrapper[7599]: I0313 01:17:20.227560 7599 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 01:17:20.230503 master-0 kubenswrapper[7599]: I0313 01:17:20.227794 7599 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 01:17:20.231649 master-0 kubenswrapper[7599]: I0313 01:17:20.230855 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-xd626"] Mar 13 01:17:20.233583 master-0 kubenswrapper[7599]: I0313 01:17:20.233482 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l"] Mar 13 01:17:20.402848 master-0 kubenswrapper[7599]: I0313 01:17:20.402772 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0caabde8-d49a-431d-afe5-8b283188c11c-service-ca-bundle\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.403044 master-0 kubenswrapper[7599]: I0313 01:17:20.402867 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0ff72b58-aca9-46f1-86ca-da8339734ac9-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-rhk4l\" (UID: \"0ff72b58-aca9-46f1-86ca-da8339734ac9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" Mar 13 01:17:20.403044 master-0 kubenswrapper[7599]: I0313 01:17:20.402940 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hngc8\" (UniqueName: \"kubernetes.io/projected/2ec42095-36f5-48cf-af9d-e7a60f6cb121-kube-api-access-hngc8\") pod \"network-check-source-7c67b67d47-xd626\" (UID: 
\"2ec42095-36f5-48cf-af9d-e7a60f6cb121\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-xd626" Mar 13 01:17:20.403573 master-0 kubenswrapper[7599]: I0313 01:17:20.403504 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-metrics-certs\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.403733 master-0 kubenswrapper[7599]: I0313 01:17:20.403698 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-stats-auth\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.403845 master-0 kubenswrapper[7599]: I0313 01:17:20.403817 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-default-certificate\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.403976 master-0 kubenswrapper[7599]: I0313 01:17:20.403954 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vccjz\" (UniqueName: \"kubernetes.io/projected/0caabde8-d49a-431d-afe5-8b283188c11c-kube-api-access-vccjz\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.420669 master-0 kubenswrapper[7599]: I0313 01:17:20.420003 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" event={"ID":"c55a215a-9a95-4f48-8668-9b76503c3044","Type":"ContainerStarted","Data":"1d575a80ac3013ddff84273bd8ba888c65fcb8040877e9bbfdc072319c4e21d2"} Mar 13 01:17:20.420669 master-0 kubenswrapper[7599]: I0313 01:17:20.420074 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" event={"ID":"c55a215a-9a95-4f48-8668-9b76503c3044","Type":"ContainerStarted","Data":"735be8b153188a56b409f008bb739a615b04b0b4c113e5995034ae8189be2847"} Mar 13 01:17:20.420669 master-0 kubenswrapper[7599]: I0313 01:17:20.420097 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" event={"ID":"c55a215a-9a95-4f48-8668-9b76503c3044","Type":"ContainerStarted","Data":"33bf722525c772142ec0cd09e0392bf59b78686977bb452929548b6bc04bfae5"} Mar 13 01:17:20.424153 master-0 kubenswrapper[7599]: I0313 01:17:20.424088 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" event={"ID":"2760a216-fd4b-46d9-a4ec-2d3285ec02bd","Type":"ContainerStarted","Data":"746d63b70b482e97e137cf2a5fbc732604b747973c61d366e9b68a115a9813fc"} Mar 13 01:17:20.427176 master-0 kubenswrapper[7599]: I0313 01:17:20.427118 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" event={"ID":"80eb89dc-ccfc-4360-811a-82a3ef6f7b65","Type":"ContainerStarted","Data":"32e6aea9a2d0b5bfb5397a8b0d83b4b7864301a451107157a16f24b685af041a"} Mar 13 01:17:20.427176 master-0 kubenswrapper[7599]: I0313 01:17:20.427172 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" 
event={"ID":"80eb89dc-ccfc-4360-811a-82a3ef6f7b65","Type":"ContainerStarted","Data":"1424e782d7d010eb17f5faeba062e24f9a0ac4b5291d10741b6ebae4bf0fcb9b"} Mar 13 01:17:20.427306 master-0 kubenswrapper[7599]: I0313 01:17:20.427184 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" event={"ID":"80eb89dc-ccfc-4360-811a-82a3ef6f7b65","Type":"ContainerStarted","Data":"28719caedf8b1f4ed31a1dd696057fe3b52449ba6c0d76bcf9bc027a93b14830"} Mar 13 01:17:20.506529 master-0 kubenswrapper[7599]: I0313 01:17:20.506424 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-metrics-certs\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.507936 master-0 kubenswrapper[7599]: I0313 01:17:20.507852 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-stats-auth\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.508048 master-0 kubenswrapper[7599]: I0313 01:17:20.508027 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-default-certificate\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.509002 master-0 kubenswrapper[7599]: I0313 01:17:20.508355 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vccjz\" (UniqueName: 
\"kubernetes.io/projected/0caabde8-d49a-431d-afe5-8b283188c11c-kube-api-access-vccjz\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.509002 master-0 kubenswrapper[7599]: I0313 01:17:20.508723 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0caabde8-d49a-431d-afe5-8b283188c11c-service-ca-bundle\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.509206 master-0 kubenswrapper[7599]: I0313 01:17:20.509020 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0ff72b58-aca9-46f1-86ca-da8339734ac9-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-rhk4l\" (UID: \"0ff72b58-aca9-46f1-86ca-da8339734ac9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" Mar 13 01:17:20.509538 master-0 kubenswrapper[7599]: I0313 01:17:20.509440 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hngc8\" (UniqueName: \"kubernetes.io/projected/2ec42095-36f5-48cf-af9d-e7a60f6cb121-kube-api-access-hngc8\") pod \"network-check-source-7c67b67d47-xd626\" (UID: \"2ec42095-36f5-48cf-af9d-e7a60f6cb121\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-xd626" Mar 13 01:17:20.510592 master-0 kubenswrapper[7599]: I0313 01:17:20.510467 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0caabde8-d49a-431d-afe5-8b283188c11c-service-ca-bundle\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 
01:17:20.512817 master-0 kubenswrapper[7599]: I0313 01:17:20.512672 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-default-certificate\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.512817 master-0 kubenswrapper[7599]: I0313 01:17:20.512676 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-stats-auth\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.515444 master-0 kubenswrapper[7599]: I0313 01:17:20.515390 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0ff72b58-aca9-46f1-86ca-da8339734ac9-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-rhk4l\" (UID: \"0ff72b58-aca9-46f1-86ca-da8339734ac9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" Mar 13 01:17:20.516128 master-0 kubenswrapper[7599]: I0313 01:17:20.516057 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-metrics-certs\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.574761 master-0 kubenswrapper[7599]: I0313 01:17:20.574651 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" Mar 13 01:17:20.725358 master-0 kubenswrapper[7599]: I0313 01:17:20.720436 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" podStartSLOduration=3.7201980089999998 podStartE2EDuration="3.720198009s" podCreationTimestamp="2026-03-13 01:17:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:17:20.709348626 +0000 UTC m=+299.981028020" watchObservedRunningTime="2026-03-13 01:17:20.720198009 +0000 UTC m=+299.991877443" Mar 13 01:17:20.741132 master-0 kubenswrapper[7599]: I0313 01:17:20.741066 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vccjz\" (UniqueName: \"kubernetes.io/projected/0caabde8-d49a-431d-afe5-8b283188c11c-kube-api-access-vccjz\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.744666 master-0 kubenswrapper[7599]: I0313 01:17:20.742710 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hngc8\" (UniqueName: \"kubernetes.io/projected/2ec42095-36f5-48cf-af9d-e7a60f6cb121-kube-api-access-hngc8\") pod \"network-check-source-7c67b67d47-xd626\" (UID: \"2ec42095-36f5-48cf-af9d-e7a60f6cb121\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-xd626" Mar 13 01:17:20.856119 master-0 kubenswrapper[7599]: I0313 01:17:20.851156 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-xd626" Mar 13 01:17:20.922089 master-0 kubenswrapper[7599]: I0313 01:17:20.922038 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:20.954037 master-0 kubenswrapper[7599]: W0313 01:17:20.953972 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0caabde8_d49a_431d_afe5_8b283188c11c.slice/crio-f43b125de8c8fd9d38adfd65f25335aed5effea8536c299385f910d4e86c6dd3 WatchSource:0}: Error finding container f43b125de8c8fd9d38adfd65f25335aed5effea8536c299385f910d4e86c6dd3: Status 404 returned error can't find the container with id f43b125de8c8fd9d38adfd65f25335aed5effea8536c299385f910d4e86c6dd3 Mar 13 01:17:20.995901 master-0 kubenswrapper[7599]: I0313 01:17:20.995843 7599 kubelet.go:1505] "Image garbage collection succeeded" Mar 13 01:17:21.021605 master-0 kubenswrapper[7599]: I0313 01:17:21.018799 7599 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Mar 13 01:17:21.106993 master-0 kubenswrapper[7599]: I0313 01:17:21.106905 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" podStartSLOduration=3.63459123 podStartE2EDuration="14.106874587s" podCreationTimestamp="2026-03-13 01:17:07 +0000 UTC" firstStartedPulling="2026-03-13 01:17:09.052126083 +0000 UTC m=+288.323805467" lastFinishedPulling="2026-03-13 01:17:19.52440943 +0000 UTC m=+298.796088824" observedRunningTime="2026-03-13 01:17:20.770429323 +0000 UTC m=+300.042108757" watchObservedRunningTime="2026-03-13 01:17:21.106874587 +0000 UTC m=+300.378553991" Mar 13 01:17:21.110707 master-0 kubenswrapper[7599]: I0313 01:17:21.110660 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l"] Mar 13 01:17:21.327826 master-0 kubenswrapper[7599]: I0313 01:17:21.327759 7599 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-network-diagnostics/network-check-source-7c67b67d47-xd626"] Mar 13 01:17:21.436406 master-0 kubenswrapper[7599]: I0313 01:17:21.436333 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" event={"ID":"0caabde8-d49a-431d-afe5-8b283188c11c","Type":"ContainerStarted","Data":"f43b125de8c8fd9d38adfd65f25335aed5effea8536c299385f910d4e86c6dd3"} Mar 13 01:17:21.437390 master-0 kubenswrapper[7599]: I0313 01:17:21.437345 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" event={"ID":"0ff72b58-aca9-46f1-86ca-da8339734ac9","Type":"ContainerStarted","Data":"7045bd9f4a827f56cb7bd9e063ae71240fc184218e9ad8e94a5fef4b4d176a48"} Mar 13 01:17:21.438450 master-0 kubenswrapper[7599]: I0313 01:17:21.438415 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-xd626" event={"ID":"2ec42095-36f5-48cf-af9d-e7a60f6cb121","Type":"ContainerStarted","Data":"032c2b20f604f0aca4515b1e3c70d1cee6305981fa2fc0ade62b27cbdcf9dd58"} Mar 13 01:17:21.446236 master-0 kubenswrapper[7599]: I0313 01:17:21.446181 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" event={"ID":"80eb89dc-ccfc-4360-811a-82a3ef6f7b65","Type":"ContainerStarted","Data":"2391a7e241b9181d991fd8827071d7c786dfd62d6f069f6aca8a7c1236a1146f"} Mar 13 01:17:21.468157 master-0 kubenswrapper[7599]: I0313 01:17:21.465776 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" podStartSLOduration=6.465747854 podStartE2EDuration="6.465747854s" podCreationTimestamp="2026-03-13 01:17:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 
+0000 UTC" observedRunningTime="2026-03-13 01:17:21.462228178 +0000 UTC m=+300.733907612" watchObservedRunningTime="2026-03-13 01:17:21.465747854 +0000 UTC m=+300.737427248" Mar 13 01:17:22.457712 master-0 kubenswrapper[7599]: I0313 01:17:22.457622 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-xd626" event={"ID":"2ec42095-36f5-48cf-af9d-e7a60f6cb121","Type":"ContainerStarted","Data":"2bb3a92033a6c19faf89a576af344da6973d175b1231eacb28c99caaf1d49da7"} Mar 13 01:17:22.478894 master-0 kubenswrapper[7599]: I0313 01:17:22.478373 7599 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-network-diagnostics/network-check-source-7c67b67d47-xd626" podStartSLOduration=351.478343504 podStartE2EDuration="5m51.478343504s" podCreationTimestamp="2026-03-13 01:11:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:17:22.475209529 +0000 UTC m=+301.746888953" watchObservedRunningTime="2026-03-13 01:17:22.478343504 +0000 UTC m=+301.750022908" Mar 13 01:17:22.599792 master-0 kubenswrapper[7599]: I0313 01:17:22.599223 7599 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-controller-manager-master-0"] Mar 13 01:17:22.599792 master-0 kubenswrapper[7599]: I0313 01:17:22.599591 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" containerID="cri-o://14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d" gracePeriod=30 Mar 13 01:17:22.599792 master-0 kubenswrapper[7599]: I0313 01:17:22.599720 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="f78c05e1499b533b83f091333d61f045" 
containerName="kube-controller-manager" containerID="cri-o://01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8" gracePeriod=30 Mar 13 01:17:22.601240 master-0 kubenswrapper[7599]: I0313 01:17:22.601177 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 01:17:22.601651 master-0 kubenswrapper[7599]: E0313 01:17:22.601622 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.601713 master-0 kubenswrapper[7599]: I0313 01:17:22.601653 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.601713 master-0 kubenswrapper[7599]: E0313 01:17:22.601672 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.601713 master-0 kubenswrapper[7599]: I0313 01:17:22.601683 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.601805 master-0 kubenswrapper[7599]: E0313 01:17:22.601728 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 01:17:22.601805 master-0 kubenswrapper[7599]: I0313 01:17:22.601743 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 01:17:22.601805 master-0 kubenswrapper[7599]: E0313 01:17:22.601757 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.601805 master-0 kubenswrapper[7599]: I0313 01:17:22.601765 7599 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.601946 master-0 kubenswrapper[7599]: I0313 01:17:22.601920 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.601946 master-0 kubenswrapper[7599]: I0313 01:17:22.601942 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.602021 master-0 kubenswrapper[7599]: I0313 01:17:22.601958 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.602021 master-0 kubenswrapper[7599]: I0313 01:17:22.601974 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="cluster-policy-controller" Mar 13 01:17:22.602144 master-0 kubenswrapper[7599]: E0313 01:17:22.602102 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.602245 master-0 kubenswrapper[7599]: I0313 01:17:22.602146 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.602245 master-0 kubenswrapper[7599]: E0313 01:17:22.602160 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.602245 master-0 kubenswrapper[7599]: I0313 01:17:22.602170 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.602335 master-0 kubenswrapper[7599]: I0313 01:17:22.602311 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" 
containerName="kube-controller-manager" Mar 13 01:17:22.602335 master-0 kubenswrapper[7599]: I0313 01:17:22.602328 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="f78c05e1499b533b83f091333d61f045" containerName="kube-controller-manager" Mar 13 01:17:22.603341 master-0 kubenswrapper[7599]: I0313 01:17:22.603316 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:22.648575 master-0 kubenswrapper[7599]: I0313 01:17:22.648403 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:22.650137 master-0 kubenswrapper[7599]: I0313 01:17:22.650073 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:22.661874 master-0 kubenswrapper[7599]: I0313 01:17:22.661574 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 01:17:22.752543 master-0 kubenswrapper[7599]: I0313 01:17:22.752435 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:22.752875 
master-0 kubenswrapper[7599]: I0313 01:17:22.752574 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:22.752875 master-0 kubenswrapper[7599]: I0313 01:17:22.752593 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:22.752875 master-0 kubenswrapper[7599]: I0313 01:17:22.752648 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:22.954855 master-0 kubenswrapper[7599]: I0313 01:17:22.954772 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:23.074348 master-0 kubenswrapper[7599]: I0313 01:17:23.074154 7599 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["kube-system/bootstrap-kube-scheduler-master-0"] Mar 13 01:17:23.074655 master-0 kubenswrapper[7599]: I0313 01:17:23.074575 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 01:17:23.074779 master-0 kubenswrapper[7599]: I0313 01:17:23.074700 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" containerID="cri-o://9ffa27ab0dc3e98ab44b8a36575c0b8aebd551a30b7af7d3a867758695337923" gracePeriod=30 Mar 13 01:17:23.075183 master-0 kubenswrapper[7599]: E0313 01:17:23.075135 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 01:17:23.075183 master-0 kubenswrapper[7599]: I0313 01:17:23.075172 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 01:17:23.075605 master-0 kubenswrapper[7599]: E0313 01:17:23.075189 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 01:17:23.075605 master-0 kubenswrapper[7599]: I0313 01:17:23.075204 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 01:17:23.075684 master-0 kubenswrapper[7599]: I0313 01:17:23.075649 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 01:17:23.075684 master-0 kubenswrapper[7599]: I0313 01:17:23.075677 7599 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 01:17:23.077977 master-0 kubenswrapper[7599]: I0313 01:17:23.077936 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:23.125854 master-0 kubenswrapper[7599]: I0313 01:17:23.125786 7599 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 01:17:23.159353 master-0 kubenswrapper[7599]: I0313 01:17:23.159259 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:23.159353 master-0 kubenswrapper[7599]: I0313 01:17:23.159333 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:23.262460 master-0 kubenswrapper[7599]: I0313 01:17:23.262389 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:23.262915 master-0 kubenswrapper[7599]: I0313 01:17:23.262895 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:23.263071 master-0 kubenswrapper[7599]: I0313 01:17:23.263006 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:23.263139 master-0 kubenswrapper[7599]: I0313 01:17:23.262631 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:23.416133 master-0 kubenswrapper[7599]: I0313 01:17:23.414793 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:23.682789 master-0 kubenswrapper[7599]: W0313 01:17:23.682725 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1d3d45b6ce1b3764f9927e623a71adf8.slice/crio-f1893a5398893367fa6dfc57f35d1608dbd0ecd13591ae45338583f2663f6d59 WatchSource:0}: Error finding container f1893a5398893367fa6dfc57f35d1608dbd0ecd13591ae45338583f2663f6d59: Status 404 returned error can't find the container with id f1893a5398893367fa6dfc57f35d1608dbd0ecd13591ae45338583f2663f6d59 Mar 13 01:17:23.688069 master-0 kubenswrapper[7599]: W0313 01:17:23.688025 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod24e04786030519cf5fd9f600ea6710e9.slice/crio-719fed2d09a4c83ba7a2065c6d705852286e4074c168ef17e96ec1f4c19087b7 WatchSource:0}: Error finding container 719fed2d09a4c83ba7a2065c6d705852286e4074c168ef17e96ec1f4c19087b7: Status 404 returned error can't find the container with id 719fed2d09a4c83ba7a2065c6d705852286e4074c168ef17e96ec1f4c19087b7 Mar 13 01:17:23.710584 master-0 kubenswrapper[7599]: I0313 01:17:23.710143 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:17:23.712123 master-0 kubenswrapper[7599]: I0313 01:17:23.712076 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:17:23.735663 master-0 kubenswrapper[7599]: I0313 01:17:23.735564 7599 kubelet.go:2706] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="e5815d77-bfd4-459e-9678-c08ac790805d" Mar 13 01:17:23.771784 master-0 kubenswrapper[7599]: I0313 01:17:23.771567 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 01:17:23.771784 master-0 kubenswrapper[7599]: I0313 01:17:23.771633 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 01:17:23.771784 master-0 kubenswrapper[7599]: I0313 01:17:23.771639 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config" (OuterVolumeSpecName: "config") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:23.771784 master-0 kubenswrapper[7599]: I0313 01:17:23.771661 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 13 01:17:23.771784 master-0 kubenswrapper[7599]: I0313 01:17:23.771697 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:23.771784 master-0 kubenswrapper[7599]: I0313 01:17:23.771704 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 01:17:23.771784 master-0 kubenswrapper[7599]: I0313 01:17:23.771727 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets" (OuterVolumeSpecName: "secrets") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:23.771784 master-0 kubenswrapper[7599]: I0313 01:17:23.771751 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 01:17:23.772206 master-0 kubenswrapper[7599]: I0313 01:17:23.771812 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") pod \"a1a56802af72ce1aac6b5077f1695ac0\" (UID: \"a1a56802af72ce1aac6b5077f1695ac0\") " Mar 13 01:17:23.772206 master-0 kubenswrapper[7599]: I0313 01:17:23.771816 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:23.772206 master-0 kubenswrapper[7599]: I0313 01:17:23.771841 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") pod \"f78c05e1499b533b83f091333d61f045\" (UID: \"f78c05e1499b533b83f091333d61f045\") " Mar 13 01:17:23.772206 master-0 kubenswrapper[7599]: I0313 01:17:23.771941 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets" (OuterVolumeSpecName: "secrets") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "secrets". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:23.772206 master-0 kubenswrapper[7599]: I0313 01:17:23.771981 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs" (OuterVolumeSpecName: "logs") pod "f78c05e1499b533b83f091333d61f045" (UID: "f78c05e1499b533b83f091333d61f045"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:23.772206 master-0 kubenswrapper[7599]: I0313 01:17:23.771962 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs" (OuterVolumeSpecName: "logs") pod "a1a56802af72ce1aac6b5077f1695ac0" (UID: "a1a56802af72ce1aac6b5077f1695ac0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:23.772812 master-0 kubenswrapper[7599]: I0313 01:17:23.772551 7599 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:23.772812 master-0 kubenswrapper[7599]: I0313 01:17:23.772574 7599 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-logs\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:23.772812 master-0 kubenswrapper[7599]: I0313 01:17:23.772584 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:23.772812 master-0 kubenswrapper[7599]: I0313 01:17:23.772596 7599 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:23.772812 master-0 
kubenswrapper[7599]: I0313 01:17:23.772608 7599 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/a1a56802af72ce1aac6b5077f1695ac0-secrets\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:23.772812 master-0 kubenswrapper[7599]: I0313 01:17:23.772619 7599 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-ssl-certs-host\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:23.772812 master-0 kubenswrapper[7599]: I0313 01:17:23.772628 7599 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/f78c05e1499b533b83f091333d61f045-secrets\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:24.235593 master-0 kubenswrapper[7599]: I0313 01:17:24.235526 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 01:17:24.236717 master-0 kubenswrapper[7599]: I0313 01:17:24.236673 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.237022 master-0 kubenswrapper[7599]: I0313 01:17:24.236961 7599 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"] Mar 13 01:17:24.237412 master-0 kubenswrapper[7599]: I0313 01:17:24.237362 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" containerID="cri-o://6c9bd5245949231d7973259139b8774c20bbb32018502eb3bd133d4e8aa89584" gracePeriod=15 Mar 13 01:17:24.237668 master-0 kubenswrapper[7599]: I0313 01:17:24.237622 7599 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://8df1059c68299a3330235cc4d111397a59bfb0c4b40d95af664427109c129231" gracePeriod=15 Mar 13 01:17:24.238968 master-0 kubenswrapper[7599]: I0313 01:17:24.238910 7599 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 01:17:24.239347 master-0 kubenswrapper[7599]: E0313 01:17:24.239326 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 01:17:24.239347 master-0 kubenswrapper[7599]: I0313 01:17:24.239345 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 01:17:24.239492 master-0 kubenswrapper[7599]: E0313 01:17:24.239378 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 01:17:24.239492 master-0 kubenswrapper[7599]: I0313 01:17:24.239385 7599 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 01:17:24.239492 master-0 kubenswrapper[7599]: E0313 01:17:24.239396 7599 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 01:17:24.239492 master-0 kubenswrapper[7599]: I0313 01:17:24.239403 7599 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 01:17:24.239898 master-0 kubenswrapper[7599]: I0313 01:17:24.239556 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 01:17:24.239898 master-0 kubenswrapper[7599]: I0313 01:17:24.239578 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 01:17:24.239898 master-0 kubenswrapper[7599]: I0313 01:17:24.239590 7599 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 01:17:24.241277 master-0 kubenswrapper[7599]: I0313 01:17:24.241246 7599 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.282341 master-0 kubenswrapper[7599]: E0313 01:17:24.282152 7599 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c41bf8f4b1171 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:24e04786030519cf5fd9f600ea6710e9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:17:24.275753329 +0000 UTC m=+303.547432723,LastTimestamp:2026-03-13 01:17:24.275753329 +0000 UTC m=+303.547432723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:17:24.283428 master-0 kubenswrapper[7599]: I0313 01:17:24.283344 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.283788 master-0 kubenswrapper[7599]: I0313 01:17:24.283752 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.284049 master-0 kubenswrapper[7599]: I0313 01:17:24.283987 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.288208 master-0 kubenswrapper[7599]: E0313 01:17:24.284931 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.288583 master-0 kubenswrapper[7599]: I0313 01:17:24.288496 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.288840 master-0 kubenswrapper[7599]: I0313 01:17:24.288777 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.289110 master-0 kubenswrapper[7599]: I0313 01:17:24.289047 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") 
pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.290764 master-0 kubenswrapper[7599]: I0313 01:17:24.290690 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.291082 master-0 kubenswrapper[7599]: I0313 01:17:24.291051 7599 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.302351 master-0 kubenswrapper[7599]: E0313 01:17:24.302264 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.393266 master-0 kubenswrapper[7599]: I0313 01:17:24.393197 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.393266 master-0 kubenswrapper[7599]: I0313 01:17:24.393245 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: 
\"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393281 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393348 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393375 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393401 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393426 7599 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393448 7599 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393572 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393576 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393625 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393649 7599 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393742 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393784 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393881 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.394089 master-0 kubenswrapper[7599]: I0313 01:17:24.393939 7599 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.481676 master-0 kubenswrapper[7599]: I0313 01:17:24.481595 7599 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" event={"ID":"0caabde8-d49a-431d-afe5-8b283188c11c","Type":"ContainerStarted","Data":"7e76ebbad2e877cf6d5c28c9b5cd3893608f8f807f197ce781dd0020a3075431"} Mar 13 01:17:24.482776 master-0 kubenswrapper[7599]: I0313 01:17:24.482707 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.484993 master-0 kubenswrapper[7599]: I0313 01:17:24.484960 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"b1c12809753fc2546fb8e821c8e7f6bbad80bd3bc2111cc6731d186681cf0988"} Mar 13 01:17:24.485077 master-0 kubenswrapper[7599]: I0313 01:17:24.484996 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82"} Mar 13 01:17:24.485077 master-0 kubenswrapper[7599]: I0313 01:17:24.485014 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"719fed2d09a4c83ba7a2065c6d705852286e4074c168ef17e96ec1f4c19087b7"} Mar 13 01:17:24.488981 master-0 kubenswrapper[7599]: I0313 01:17:24.488874 7599 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8" exitCode=0 Mar 
13 01:17:24.488981 master-0 kubenswrapper[7599]: I0313 01:17:24.488907 7599 generic.go:334] "Generic (PLEG): container finished" podID="f78c05e1499b533b83f091333d61f045" containerID="14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d" exitCode=0 Mar 13 01:17:24.488981 master-0 kubenswrapper[7599]: I0313 01:17:24.488945 7599 scope.go:117] "RemoveContainer" containerID="01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8" Mar 13 01:17:24.489131 master-0 kubenswrapper[7599]: I0313 01:17:24.488984 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:17:24.493365 master-0 kubenswrapper[7599]: I0313 01:17:24.493243 7599 generic.go:334] "Generic (PLEG): container finished" podID="7106c6fe-7c8d-45b9-bc5c-521db743663f" containerID="9dea5041e065ce99780170074cdc1fcbcd589815d7a4ea10ac0c5a7ebf2078b0" exitCode=0 Mar 13 01:17:24.493365 master-0 kubenswrapper[7599]: I0313 01:17:24.493343 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"7106c6fe-7c8d-45b9-bc5c-521db743663f","Type":"ContainerDied","Data":"9dea5041e065ce99780170074cdc1fcbcd589815d7a4ea10ac0c5a7ebf2078b0"} Mar 13 01:17:24.494317 master-0 kubenswrapper[7599]: I0313 01:17:24.494256 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.495004 master-0 kubenswrapper[7599]: I0313 01:17:24.494944 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.497027 master-0 kubenswrapper[7599]: I0313 01:17:24.496938 7599 generic.go:334] "Generic (PLEG): container finished" podID="fdcd8438-d33f-490f-a841-8944c58506f8" containerID="263627f8d8439063ebce2b99f2d70b421aed9f9cb196a75460d6a6b14ebb0fe5" exitCode=0 Mar 13 01:17:24.497161 master-0 kubenswrapper[7599]: I0313 01:17:24.497019 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"fdcd8438-d33f-490f-a841-8944c58506f8","Type":"ContainerDied","Data":"263627f8d8439063ebce2b99f2d70b421aed9f9cb196a75460d6a6b14ebb0fe5"} Mar 13 01:17:24.498116 master-0 kubenswrapper[7599]: I0313 01:17:24.498058 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.498797 master-0 kubenswrapper[7599]: I0313 01:17:24.498757 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.499307 master-0 kubenswrapper[7599]: I0313 01:17:24.499257 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 
192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.500253 master-0 kubenswrapper[7599]: I0313 01:17:24.500223 7599 generic.go:334] "Generic (PLEG): container finished" podID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" containerID="7b8fcf0165d80adda60451116dbf0d6712f4aa8b3cf335302becbea472ed8b9a" exitCode=0 Mar 13 01:17:24.500332 master-0 kubenswrapper[7599]: I0313 01:17:24.500298 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90","Type":"ContainerDied","Data":"7b8fcf0165d80adda60451116dbf0d6712f4aa8b3cf335302becbea472ed8b9a"} Mar 13 01:17:24.501019 master-0 kubenswrapper[7599]: I0313 01:17:24.500978 7599 status_manager.go:851] "Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.501453 master-0 kubenswrapper[7599]: I0313 01:17:24.501409 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.501897 master-0 kubenswrapper[7599]: I0313 01:17:24.501854 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.502341 master-0 kubenswrapper[7599]: I0313 01:17:24.502301 7599 
status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.504139 master-0 kubenswrapper[7599]: I0313 01:17:24.504097 7599 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="10e54ccf1c79035f275fa3427f827eeb618189c70d330140baae622cfa30b962" exitCode=0 Mar 13 01:17:24.504502 master-0 kubenswrapper[7599]: I0313 01:17:24.504166 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"10e54ccf1c79035f275fa3427f827eeb618189c70d330140baae622cfa30b962"} Mar 13 01:17:24.504502 master-0 kubenswrapper[7599]: I0313 01:17:24.504190 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"f1893a5398893367fa6dfc57f35d1608dbd0ecd13591ae45338583f2663f6d59"} Mar 13 01:17:24.505539 master-0 kubenswrapper[7599]: I0313 01:17:24.505429 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.506170 master-0 kubenswrapper[7599]: I0313 01:17:24.506134 7599 status_manager.go:851] "Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.506992 master-0 kubenswrapper[7599]: I0313 01:17:24.506920 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.508080 master-0 kubenswrapper[7599]: I0313 01:17:24.507776 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.508445 master-0 kubenswrapper[7599]: I0313 01:17:24.508405 7599 status_manager.go:851] "Failed to get status for pod" podUID="1d3d45b6ce1b3764f9927e623a71adf8" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.510783 master-0 kubenswrapper[7599]: I0313 01:17:24.510710 7599 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="8df1059c68299a3330235cc4d111397a59bfb0c4b40d95af664427109c129231" exitCode=0 Mar 13 01:17:24.513621 master-0 kubenswrapper[7599]: I0313 01:17:24.513573 7599 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="9ffa27ab0dc3e98ab44b8a36575c0b8aebd551a30b7af7d3a867758695337923" exitCode=0 Mar 13 01:17:24.513711 
master-0 kubenswrapper[7599]: I0313 01:17:24.513626 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6b3f392e02f5ed94d399a015a546ebd73a07ae53ff9ae5634f2dda7569b0d7e" Mar 13 01:17:24.513711 master-0 kubenswrapper[7599]: I0313 01:17:24.513635 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:17:24.516251 master-0 kubenswrapper[7599]: I0313 01:17:24.516154 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" event={"ID":"0ff72b58-aca9-46f1-86ca-da8339734ac9","Type":"ContainerStarted","Data":"dc5324ce448f7f0929526b6c9f2cd217eef4b29969f2dce2f06bcc90c6f160cb"} Mar 13 01:17:24.518815 master-0 kubenswrapper[7599]: I0313 01:17:24.516613 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" Mar 13 01:17:24.519890 master-0 kubenswrapper[7599]: I0313 01:17:24.519480 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.520404 master-0 kubenswrapper[7599]: I0313 01:17:24.520331 7599 status_manager.go:851] "Failed to get status for pod" podUID="1d3d45b6ce1b3764f9927e623a71adf8" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.521158 master-0 kubenswrapper[7599]: I0313 01:17:24.521118 7599 status_manager.go:851] "Failed to get status for pod" 
podUID="0ff72b58-aca9-46f1-86ca-da8339734ac9" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-8464df8497-rhk4l\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.521843 master-0 kubenswrapper[7599]: I0313 01:17:24.521798 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.522765 master-0 kubenswrapper[7599]: I0313 01:17:24.522346 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" Mar 13 01:17:24.522765 master-0 kubenswrapper[7599]: I0313 01:17:24.522463 7599 status_manager.go:851] "Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.523152 master-0 kubenswrapper[7599]: I0313 01:17:24.523051 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.523808 master-0 kubenswrapper[7599]: I0313 01:17:24.523755 7599 status_manager.go:851] "Failed to get status for pod" 
podUID="1d3d45b6ce1b3764f9927e623a71adf8" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.526110 master-0 kubenswrapper[7599]: I0313 01:17:24.524298 7599 status_manager.go:851] "Failed to get status for pod" podUID="0ff72b58-aca9-46f1-86ca-da8339734ac9" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-8464df8497-rhk4l\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.526110 master-0 kubenswrapper[7599]: I0313 01:17:24.524826 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.526110 master-0 kubenswrapper[7599]: I0313 01:17:24.525268 7599 status_manager.go:851] "Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.526110 master-0 kubenswrapper[7599]: I0313 01:17:24.525764 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 
192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.526469 master-0 kubenswrapper[7599]: I0313 01:17:24.526428 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.541552 master-0 kubenswrapper[7599]: I0313 01:17:24.541492 7599 scope.go:117] "RemoveContainer" containerID="2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0" Mar 13 01:17:24.586454 master-0 kubenswrapper[7599]: I0313 01:17:24.586395 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:24.604381 master-0 kubenswrapper[7599]: I0313 01:17:24.604320 7599 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:24.634750 master-0 kubenswrapper[7599]: W0313 01:17:24.634703 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf417e14665db2ffffa887ce21c9ff0ed.slice/crio-cfc26f6d3347a68e4b723da2b42435408304ba3ab936c3e96d2706d8fe04b73e WatchSource:0}: Error finding container cfc26f6d3347a68e4b723da2b42435408304ba3ab936c3e96d2706d8fe04b73e: Status 404 returned error can't find the container with id cfc26f6d3347a68e4b723da2b42435408304ba3ab936c3e96d2706d8fe04b73e Mar 13 01:17:24.657693 master-0 kubenswrapper[7599]: W0313 01:17:24.657604 7599 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcdcecc61ff5eeb08bd2a3ac12599e4f9.slice/crio-d3c9a7ae76767c58b811cabb43c24171c3fc11aa2f0559500ff39ed6ef226896 WatchSource:0}: Error finding container 
d3c9a7ae76767c58b811cabb43c24171c3fc11aa2f0559500ff39ed6ef226896: Status 404 returned error can't find the container with id d3c9a7ae76767c58b811cabb43c24171c3fc11aa2f0559500ff39ed6ef226896 Mar 13 01:17:24.680375 master-0 kubenswrapper[7599]: I0313 01:17:24.680297 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.681653 master-0 kubenswrapper[7599]: I0313 01:17:24.681572 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.683223 master-0 kubenswrapper[7599]: I0313 01:17:24.682593 7599 status_manager.go:851] "Failed to get status for pod" podUID="1d3d45b6ce1b3764f9927e623a71adf8" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.691779 master-0 kubenswrapper[7599]: I0313 01:17:24.683351 7599 status_manager.go:851] "Failed to get status for pod" podUID="0ff72b58-aca9-46f1-86ca-da8339734ac9" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-8464df8497-rhk4l\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.691779 master-0 kubenswrapper[7599]: I0313 
01:17:24.683921 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.691779 master-0 kubenswrapper[7599]: I0313 01:17:24.684532 7599 status_manager.go:851] "Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.703037 master-0 kubenswrapper[7599]: I0313 01:17:24.701642 7599 scope.go:117] "RemoveContainer" containerID="14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d" Mar 13 01:17:24.726651 master-0 kubenswrapper[7599]: I0313 01:17:24.726587 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.727938 master-0 kubenswrapper[7599]: I0313 01:17:24.727219 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.727938 master-0 kubenswrapper[7599]: I0313 01:17:24.727658 7599 status_manager.go:851] "Failed to get status for pod" podUID="1d3d45b6ce1b3764f9927e623a71adf8" 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.728169 master-0 kubenswrapper[7599]: I0313 01:17:24.728081 7599 status_manager.go:851] "Failed to get status for pod" podUID="0ff72b58-aca9-46f1-86ca-da8339734ac9" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-8464df8497-rhk4l\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.729160 master-0 kubenswrapper[7599]: I0313 01:17:24.728495 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.729160 master-0 kubenswrapper[7599]: I0313 01:17:24.728960 7599 status_manager.go:851] "Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.729497 master-0 kubenswrapper[7599]: I0313 01:17:24.729452 7599 status_manager.go:851] "Failed to get status for pod" podUID="a1a56802af72ce1aac6b5077f1695ac0" pod="kube-system/bootstrap-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:24.733607 
master-0 kubenswrapper[7599]: I0313 01:17:24.733500 7599 scope.go:117] "RemoveContainer" containerID="01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8" Mar 13 01:17:24.734420 master-0 kubenswrapper[7599]: E0313 01:17:24.734315 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8\": container with ID starting with 01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8 not found: ID does not exist" containerID="01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8" Mar 13 01:17:24.734420 master-0 kubenswrapper[7599]: I0313 01:17:24.734344 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8"} err="failed to get container status \"01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8\": rpc error: code = NotFound desc = could not find container \"01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8\": container with ID starting with 01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8 not found: ID does not exist" Mar 13 01:17:24.734420 master-0 kubenswrapper[7599]: I0313 01:17:24.734364 7599 scope.go:117] "RemoveContainer" containerID="2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0" Mar 13 01:17:24.734980 master-0 kubenswrapper[7599]: E0313 01:17:24.734890 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0\": container with ID starting with 2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0 not found: ID does not exist" containerID="2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0" Mar 13 01:17:24.734980 master-0 kubenswrapper[7599]: I0313 01:17:24.734916 7599 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0"} err="failed to get container status \"2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0\": rpc error: code = NotFound desc = could not find container \"2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0\": container with ID starting with 2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0 not found: ID does not exist" Mar 13 01:17:24.734980 master-0 kubenswrapper[7599]: I0313 01:17:24.734934 7599 scope.go:117] "RemoveContainer" containerID="14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d" Mar 13 01:17:24.736637 master-0 kubenswrapper[7599]: E0313 01:17:24.736434 7599 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d\": container with ID starting with 14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d not found: ID does not exist" containerID="14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d" Mar 13 01:17:24.736800 master-0 kubenswrapper[7599]: I0313 01:17:24.736499 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d"} err="failed to get container status \"14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d\": rpc error: code = NotFound desc = could not find container \"14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d\": container with ID starting with 14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d not found: ID does not exist" Mar 13 01:17:24.736800 master-0 kubenswrapper[7599]: I0313 01:17:24.736740 7599 scope.go:117] "RemoveContainer" containerID="01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8" Mar 13 
01:17:24.737353 master-0 kubenswrapper[7599]: I0313 01:17:24.737302 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8"} err="failed to get container status \"01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8\": rpc error: code = NotFound desc = could not find container \"01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8\": container with ID starting with 01ac650aac3d466652c1aa9d3ffdda3c866130a927dfe3837d36d62745926aa8 not found: ID does not exist" Mar 13 01:17:24.737353 master-0 kubenswrapper[7599]: I0313 01:17:24.737346 7599 scope.go:117] "RemoveContainer" containerID="2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0" Mar 13 01:17:24.737753 master-0 kubenswrapper[7599]: I0313 01:17:24.737699 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0"} err="failed to get container status \"2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0\": rpc error: code = NotFound desc = could not find container \"2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0\": container with ID starting with 2784404eee0703d226ba54e2e5bf624c95185fb87ad0887d0590c1b171d87df0 not found: ID does not exist" Mar 13 01:17:24.737906 master-0 kubenswrapper[7599]: I0313 01:17:24.737835 7599 scope.go:117] "RemoveContainer" containerID="14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d" Mar 13 01:17:24.738450 master-0 kubenswrapper[7599]: I0313 01:17:24.738360 7599 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d"} err="failed to get container status \"14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d\": rpc error: code = NotFound desc = could not find container 
\"14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d\": container with ID starting with 14f350f9246a7588968681fffdccfa150f49e086e0f9bdbe1a7793f79fe8c18d not found: ID does not exist" Mar 13 01:17:24.738450 master-0 kubenswrapper[7599]: I0313 01:17:24.738379 7599 scope.go:117] "RemoveContainer" containerID="41a562ba2a46ef687ff091bc533dc160a94bdc1572141710b80e92f2c08eb013" Mar 13 01:17:24.923469 master-0 kubenswrapper[7599]: I0313 01:17:24.923239 7599 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:24.926542 master-0 kubenswrapper[7599]: I0313 01:17:24.926474 7599 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-kzq6q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 01:17:24.926542 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 13 01:17:24.926542 master-0 kubenswrapper[7599]: [+]process-running ok Mar 13 01:17:24.926542 master-0 kubenswrapper[7599]: healthz check failed Mar 13 01:17:24.926752 master-0 kubenswrapper[7599]: I0313 01:17:24.926558 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 01:17:24.995531 master-0 kubenswrapper[7599]: I0313 01:17:24.995439 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1a56802af72ce1aac6b5077f1695ac0" path="/var/lib/kubelet/pods/a1a56802af72ce1aac6b5077f1695ac0/volumes" Mar 13 01:17:24.996130 master-0 kubenswrapper[7599]: I0313 01:17:24.996082 7599 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f78c05e1499b533b83f091333d61f045" path="/var/lib/kubelet/pods/f78c05e1499b533b83f091333d61f045/volumes" Mar 13 
01:17:24.996707 master-0 kubenswrapper[7599]: I0313 01:17:24.996676 7599 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 13 01:17:24.999064 master-0 kubenswrapper[7599]: E0313 01:17:24.998971 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="kube-system/bootstrap-kube-controller-manager-master-0" Mar 13 01:17:24.999189 master-0 kubenswrapper[7599]: I0313 01:17:24.999059 7599 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID="" Mar 13 01:17:25.000146 master-0 kubenswrapper[7599]: E0313 01:17:25.000078 7599 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/kube-system/pods/bootstrap-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="kube-system/bootstrap-kube-scheduler-master-0" Mar 13 01:17:25.090804 master-0 kubenswrapper[7599]: E0313 01:17:25.090642 7599 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c41bf8f4b1171 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:24e04786030519cf5fd9f600ea6710e9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:17:24.275753329 +0000 UTC m=+303.547432723,LastTimestamp:2026-03-13 01:17:24.275753329 +0000 UTC m=+303.547432723,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:17:25.526680 master-0 kubenswrapper[7599]: I0313 01:17:25.526610 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"44c7d80aa4aadd7ed9cfa67d8c3f0e0defda54140db09140424d6dcf8461fe9e"} Mar 13 01:17:25.526680 master-0 kubenswrapper[7599]: I0313 01:17:25.526677 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"27da9b144d4a4f750c33de749ba64c7d7c2d328ab7a8dc23bb642f52fbaf1fd7"} Mar 13 01:17:25.527032 master-0 kubenswrapper[7599]: I0313 01:17:25.526698 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"36b85103aab608e07fe57ad44e030eaf64a6694fa43ef8b29c17a2a587b80411"} Mar 13 01:17:25.527754 master-0 kubenswrapper[7599]: I0313 01:17:25.527703 7599 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:25.528965 master-0 kubenswrapper[7599]: I0313 01:17:25.528890 7599 status_manager.go:851] "Failed to get status for pod" podUID="1d3d45b6ce1b3764f9927e623a71adf8" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 
192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.529967 master-0 kubenswrapper[7599]: I0313 01:17:25.529899 7599 status_manager.go:851] "Failed to get status for pod" podUID="0ff72b58-aca9-46f1-86ca-da8339734ac9" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-8464df8497-rhk4l\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.529967 master-0 kubenswrapper[7599]: I0313 01:17:25.529949 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerDied","Data":"ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89"} Mar 13 01:17:25.530103 master-0 kubenswrapper[7599]: I0313 01:17:25.529919 7599 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89" exitCode=0 Mar 13 01:17:25.530234 master-0 kubenswrapper[7599]: I0313 01:17:25.530171 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"d3c9a7ae76767c58b811cabb43c24171c3fc11aa2f0559500ff39ed6ef226896"} Mar 13 01:17:25.530849 master-0 kubenswrapper[7599]: I0313 01:17:25.530775 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.531425 master-0 kubenswrapper[7599]: I0313 01:17:25.531352 7599 status_manager.go:851] "Failed to get status for 
pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.531707 master-0 kubenswrapper[7599]: E0313 01:17:25.531587 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:25.532146 master-0 kubenswrapper[7599]: I0313 01:17:25.532097 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.533262 master-0 kubenswrapper[7599]: I0313 01:17:25.533229 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.533995 master-0 kubenswrapper[7599]: I0313 01:17:25.533936 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.534643 master-0 kubenswrapper[7599]: I0313 01:17:25.534577 7599 status_manager.go:851] 
"Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.535313 master-0 kubenswrapper[7599]: I0313 01:17:25.535266 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.535964 master-0 kubenswrapper[7599]: I0313 01:17:25.535905 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.536604 master-0 kubenswrapper[7599]: I0313 01:17:25.536551 7599 status_manager.go:851] "Failed to get status for pod" podUID="1d3d45b6ce1b3764f9927e623a71adf8" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.537179 master-0 kubenswrapper[7599]: I0313 01:17:25.537132 7599 status_manager.go:851] "Failed to get status for pod" podUID="0ff72b58-aca9-46f1-86ca-da8339734ac9" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-8464df8497-rhk4l\": 
dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.537313 master-0 kubenswrapper[7599]: I0313 01:17:25.537272 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"f417e14665db2ffffa887ce21c9ff0ed","Type":"ContainerStarted","Data":"1343b3441a72fc54f57c90f1ad8e6009baa9cad0afaf07655566864af4172871"} Mar 13 01:17:25.537387 master-0 kubenswrapper[7599]: I0313 01:17:25.537323 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"f417e14665db2ffffa887ce21c9ff0ed","Type":"ContainerStarted","Data":"cfc26f6d3347a68e4b723da2b42435408304ba3ab936c3e96d2706d8fe04b73e"} Mar 13 01:17:25.538294 master-0 kubenswrapper[7599]: I0313 01:17:25.538241 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.538375 master-0 kubenswrapper[7599]: E0313 01:17:25.538320 7599 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:25.538749 master-0 kubenswrapper[7599]: I0313 01:17:25.538704 7599 status_manager.go:851] "Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.539159 master-0 
kubenswrapper[7599]: I0313 01:17:25.539112 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.539618 master-0 kubenswrapper[7599]: I0313 01:17:25.539577 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.540119 master-0 kubenswrapper[7599]: I0313 01:17:25.540076 7599 status_manager.go:851] "Failed to get status for pod" podUID="1d3d45b6ce1b3764f9927e623a71adf8" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.540614 master-0 kubenswrapper[7599]: I0313 01:17:25.540575 7599 status_manager.go:851] "Failed to get status for pod" podUID="0ff72b58-aca9-46f1-86ca-da8339734ac9" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-8464df8497-rhk4l\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.542746 master-0 kubenswrapper[7599]: I0313 01:17:25.542700 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa"} Mar 13 01:17:25.542746 master-0 kubenswrapper[7599]: I0313 01:17:25.542739 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b"} Mar 13 01:17:25.544067 master-0 kubenswrapper[7599]: I0313 01:17:25.544007 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.544721 master-0 kubenswrapper[7599]: I0313 01:17:25.544675 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.546441 master-0 kubenswrapper[7599]: I0313 01:17:25.546362 7599 status_manager.go:851] "Failed to get status for pod" podUID="1d3d45b6ce1b3764f9927e623a71adf8" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.547040 master-0 kubenswrapper[7599]: I0313 01:17:25.547000 7599 status_manager.go:851] "Failed to get status for pod" podUID="0ff72b58-aca9-46f1-86ca-da8339734ac9" 
pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-8464df8497-rhk4l\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.547572 master-0 kubenswrapper[7599]: I0313 01:17:25.547524 7599 status_manager.go:851] "Failed to get status for pod" podUID="24e04786030519cf5fd9f600ea6710e9" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.548310 master-0 kubenswrapper[7599]: I0313 01:17:25.548241 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.549237 master-0 kubenswrapper[7599]: I0313 01:17:25.549176 7599 status_manager.go:851] "Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.932893 master-0 kubenswrapper[7599]: I0313 01:17:25.932818 7599 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-kzq6q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Mar 13 01:17:25.932893 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld Mar 13 
01:17:25.932893 master-0 kubenswrapper[7599]: [+]process-running ok Mar 13 01:17:25.932893 master-0 kubenswrapper[7599]: healthz check failed Mar 13 01:17:25.933503 master-0 kubenswrapper[7599]: I0313 01:17:25.933063 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Mar 13 01:17:25.953843 master-0 kubenswrapper[7599]: I0313 01:17:25.953797 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:17:25.954586 master-0 kubenswrapper[7599]: I0313 01:17:25.954547 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.954925 master-0 kubenswrapper[7599]: I0313 01:17:25.954898 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.955309 master-0 kubenswrapper[7599]: I0313 01:17:25.955269 7599 status_manager.go:851] "Failed to get status for pod" podUID="1d3d45b6ce1b3764f9927e623a71adf8" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.956004 master-0 
kubenswrapper[7599]: I0313 01:17:25.955967 7599 status_manager.go:851] "Failed to get status for pod" podUID="0ff72b58-aca9-46f1-86ca-da8339734ac9" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-8464df8497-rhk4l\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.956416 master-0 kubenswrapper[7599]: I0313 01:17:25.956374 7599 status_manager.go:851] "Failed to get status for pod" podUID="24e04786030519cf5fd9f600ea6710e9" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.956753 master-0 kubenswrapper[7599]: I0313 01:17:25.956720 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.957090 master-0 kubenswrapper[7599]: I0313 01:17:25.957064 7599 status_manager.go:851] "Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:17:25.959686 master-0 kubenswrapper[7599]: I0313 01:17:25.959661 7599 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:25.960159 master-0 kubenswrapper[7599]: I0313 01:17:25.960119 7599 status_manager.go:851] "Failed to get status for pod" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" pod="openshift-kube-controller-manager/installer-2-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/installer-2-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:17:25.960578 master-0 kubenswrapper[7599]: I0313 01:17:25.960543 7599 status_manager.go:851] "Failed to get status for pod" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" pod="openshift-kube-apiserver/installer-1-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-1-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:17:25.960902 master-0 kubenswrapper[7599]: I0313 01:17:25.960874 7599 status_manager.go:851] "Failed to get status for pod" podUID="1d3d45b6ce1b3764f9927e623a71adf8" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:17:25.962269 master-0 kubenswrapper[7599]: I0313 01:17:25.962239 7599 status_manager.go:851] "Failed to get status for pod" podUID="0ff72b58-aca9-46f1-86ca-da8339734ac9" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-operator-admission-webhook-8464df8497-rhk4l\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:17:25.962738 master-0 kubenswrapper[7599]: I0313 01:17:25.962695 7599 status_manager.go:851] "Failed to get status for pod" podUID="24e04786030519cf5fd9f600ea6710e9" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:17:25.963064 master-0 kubenswrapper[7599]: I0313 01:17:25.963036 7599 status_manager.go:851] "Failed to get status for pod" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-ingress/pods/router-default-79f8cd6fdd-kzq6q\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:17:25.963391 master-0 kubenswrapper[7599]: I0313 01:17:25.963364 7599 status_manager.go:851] "Failed to get status for pod" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" pod="openshift-kube-scheduler/installer-4-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-scheduler/pods/installer-4-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:17:25.969892 master-0 kubenswrapper[7599]: I0313 01:17:25.969853 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.019495 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") "
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.019573 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock\") pod \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") "
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.019619 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir\") pod \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") "
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.019688 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock\") pod \"fdcd8438-d33f-490f-a841-8944c58506f8\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") "
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.019724 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock\") pod \"7106c6fe-7c8d-45b9-bc5c-521db743663f\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") "
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.019754 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir\") pod \"7106c6fe-7c8d-45b9-bc5c-521db743663f\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") "
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.019787 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir\") pod \"fdcd8438-d33f-490f-a841-8944c58506f8\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") "
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.019810 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"7106c6fe-7c8d-45b9-bc5c-521db743663f\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") "
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.019838 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"fdcd8438-d33f-490f-a841-8944c58506f8\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") "
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.020483 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock" (OuterVolumeSpecName: "var-lock") pod "fdcd8438-d33f-490f-a841-8944c58506f8" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.020575 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock" (OuterVolumeSpecName: "var-lock") pod "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.020621 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7106c6fe-7c8d-45b9-bc5c-521db743663f" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.020646 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock" (OuterVolumeSpecName: "var-lock") pod "7106c6fe-7c8d-45b9-bc5c-521db743663f" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.020676 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fdcd8438-d33f-490f-a841-8944c58506f8" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.020907 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:26.026330 master-0 kubenswrapper[7599]: I0313 01:17:26.026035 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:17:26.027371 master-0 kubenswrapper[7599]: I0313 01:17:26.026572 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fdcd8438-d33f-490f-a841-8944c58506f8" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:17:26.028390 master-0 kubenswrapper[7599]: I0313 01:17:26.028317 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7106c6fe-7c8d-45b9-bc5c-521db743663f" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:17:26.121866 master-0 kubenswrapper[7599]: I0313 01:17:26.121815 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:26.121866 master-0 kubenswrapper[7599]: I0313 01:17:26.121856 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:26.121866 master-0 kubenswrapper[7599]: I0313 01:17:26.121867 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:26.121866 master-0 kubenswrapper[7599]: I0313 01:17:26.121879 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:26.122140 master-0 kubenswrapper[7599]: I0313 01:17:26.121890 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:26.122140 master-0 kubenswrapper[7599]: I0313 01:17:26.121900 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:26.122140 master-0 kubenswrapper[7599]: I0313 01:17:26.121910 7599 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:26.122140 master-0 kubenswrapper[7599]: I0313 01:17:26.121922 7599 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:26.122140 master-0 kubenswrapper[7599]: I0313 01:17:26.121931 7599 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:26.561064 master-0 kubenswrapper[7599]: I0313 01:17:26.560965 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3"}
Mar 13 01:17:26.561064 master-0 kubenswrapper[7599]: I0313 01:17:26.561068 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6"}
Mar 13 01:17:26.563918 master-0 kubenswrapper[7599]: I0313 01:17:26.563404 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:26.563918 master-0 kubenswrapper[7599]: I0313 01:17:26.563448 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-2-master-0" event={"ID":"7106c6fe-7c8d-45b9-bc5c-521db743663f","Type":"ContainerDied","Data":"61d228ad61217efd3f38e7f1eb742a8a47bf9f51d0ed1ddebcc51b7470bf905e"}
Mar 13 01:17:26.563918 master-0 kubenswrapper[7599]: I0313 01:17:26.563523 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d228ad61217efd3f38e7f1eb742a8a47bf9f51d0ed1ddebcc51b7470bf905e"
Mar 13 01:17:26.575734 master-0 kubenswrapper[7599]: I0313 01:17:26.575438 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-1-master-0" event={"ID":"fdcd8438-d33f-490f-a841-8944c58506f8","Type":"ContainerDied","Data":"5036dd248963b083dbf679edea9371d4e006e42fcff4a71dbda91fde659408c6"}
Mar 13 01:17:26.575734 master-0 kubenswrapper[7599]: I0313 01:17:26.575544 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5036dd248963b083dbf679edea9371d4e006e42fcff4a71dbda91fde659408c6"
Mar 13 01:17:26.578092 master-0 kubenswrapper[7599]: I0313 01:17:26.575415 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:26.584476 master-0 kubenswrapper[7599]: I0313 01:17:26.584441 7599 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-4-master-0" event={"ID":"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90","Type":"ContainerDied","Data":"c0b9c0cf7cb9fa1122b0ea7980af02b767737d56971625a4ab2e9432fd86c393"}
Mar 13 01:17:26.584592 master-0 kubenswrapper[7599]: I0313 01:17:26.584552 7599 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b9c0cf7cb9fa1122b0ea7980af02b767737d56971625a4ab2e9432fd86c393"
Mar 13 01:17:26.585125 master-0 kubenswrapper[7599]: I0313 01:17:26.584911 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:26.925282 master-0 kubenswrapper[7599]: I0313 01:17:26.925120 7599 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-kzq6q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 01:17:26.925282 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 13 01:17:26.925282 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 13 01:17:26.925282 master-0 kubenswrapper[7599]: healthz check failed
Mar 13 01:17:26.925282 master-0 kubenswrapper[7599]: I0313 01:17:26.925212 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 01:17:26.997691 master-0 kubenswrapper[7599]: I0313 01:17:26.997618 7599 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Mar 13 01:17:27.430595 master-0 kubenswrapper[7599]: I0313 01:17:27.430540 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:17:27.578394 master-0 kubenswrapper[7599]: I0313 01:17:27.578334 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 13 01:17:27.578394 master-0 kubenswrapper[7599]: I0313 01:17:27.578401 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 13 01:17:27.578394 master-0 kubenswrapper[7599]: I0313 01:17:27.578418 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 13 01:17:27.578793 master-0 kubenswrapper[7599]: I0313 01:17:27.578466 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 13 01:17:27.578793 master-0 kubenswrapper[7599]: I0313 01:17:27.578538 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 13 01:17:27.578793 master-0 kubenswrapper[7599]: I0313 01:17:27.578578 7599 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") pod \"5f77c8e18b751d90bc0dfe2d4e304050\" (UID: \"5f77c8e18b751d90bc0dfe2d4e304050\") "
Mar 13 01:17:27.578933 master-0 kubenswrapper[7599]: I0313 01:17:27.578895 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud" (OuterVolumeSpecName: "etc-kubernetes-cloud") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "etc-kubernetes-cloud". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:27.579367 master-0 kubenswrapper[7599]: I0313 01:17:27.579344 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets" (OuterVolumeSpecName: "secrets") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "secrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:27.579429 master-0 kubenswrapper[7599]: I0313 01:17:27.579371 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config" (OuterVolumeSpecName: "config") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "config". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:27.579429 master-0 kubenswrapper[7599]: I0313 01:17:27.579388 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:27.579429 master-0 kubenswrapper[7599]: I0313 01:17:27.579402 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs" (OuterVolumeSpecName: "logs") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:27.579429 master-0 kubenswrapper[7599]: I0313 01:17:27.579417 7599 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host" (OuterVolumeSpecName: "ssl-certs-host") pod "5f77c8e18b751d90bc0dfe2d4e304050" (UID: "5f77c8e18b751d90bc0dfe2d4e304050"). InnerVolumeSpecName "ssl-certs-host". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:17:27.673925 master-0 kubenswrapper[7599]: I0313 01:17:27.669343 7599 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="6c9bd5245949231d7973259139b8774c20bbb32018502eb3bd133d4e8aa89584" exitCode=0
Mar 13 01:17:27.673925 master-0 kubenswrapper[7599]: I0313 01:17:27.669657 7599 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"
Mar 13 01:17:27.685739 master-0 kubenswrapper[7599]: I0313 01:17:27.685680 7599 reconciler_common.go:293] "Volume detached for volume \"ssl-certs-host\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-ssl-certs-host\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:27.685739 master-0 kubenswrapper[7599]: I0313 01:17:27.685715 7599 reconciler_common.go:293] "Volume detached for volume \"etc-kubernetes-cloud\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-etc-kubernetes-cloud\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:27.685739 master-0 kubenswrapper[7599]: I0313 01:17:27.685728 7599 reconciler_common.go:293] "Volume detached for volume \"secrets\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-secrets\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:27.685739 master-0 kubenswrapper[7599]: I0313 01:17:27.685739 7599 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:27.685739 master-0 kubenswrapper[7599]: I0313 01:17:27.685747 7599 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-config\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:27.686006 master-0 kubenswrapper[7599]: I0313 01:17:27.685758 7599 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/host-path/5f77c8e18b751d90bc0dfe2d4e304050-logs\") on node \"master-0\" DevicePath \"\""
Mar 13 01:17:27.935564 master-0 kubenswrapper[7599]: I0313 01:17:27.933132 7599 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-kzq6q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 01:17:27.935564 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 13 01:17:27.935564 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 13 01:17:27.935564 master-0 kubenswrapper[7599]: healthz check failed
Mar 13 01:17:27.935564 master-0 kubenswrapper[7599]: I0313 01:17:27.933235 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 01:17:28.926086 master-0 kubenswrapper[7599]: I0313 01:17:28.926016 7599 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-kzq6q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 01:17:28.926086 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 13 01:17:28.926086 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 13 01:17:28.926086 master-0 kubenswrapper[7599]: healthz check failed
Mar 13 01:17:28.926900 master-0 kubenswrapper[7599]: I0313 01:17:28.926130 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 01:17:29.930597 master-0 kubenswrapper[7599]: I0313 01:17:29.930409 7599 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-kzq6q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 01:17:29.930597 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 13 01:17:29.930597 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 13 01:17:29.930597 master-0 kubenswrapper[7599]: healthz check failed
Mar 13 01:17:29.930597 master-0 kubenswrapper[7599]: I0313 01:17:29.930536 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 01:17:30.926936 master-0 kubenswrapper[7599]: I0313 01:17:30.926821 7599 patch_prober.go:28] interesting pod/router-default-79f8cd6fdd-kzq6q container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Mar 13 01:17:30.926936 master-0 kubenswrapper[7599]: [-]has-synced failed: reason withheld
Mar 13 01:17:30.926936 master-0 kubenswrapper[7599]: [+]process-running ok
Mar 13 01:17:30.926936 master-0 kubenswrapper[7599]: healthz check failed
Mar 13 01:17:30.927468 master-0 kubenswrapper[7599]: I0313 01:17:30.926950 7599 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" podUID="0caabde8-d49a-431d-afe5-8b283188c11c" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Mar 13 01:17:31.876136 master-0 systemd[1]: Stopping Kubernetes Kubelet...
Mar 13 01:17:31.915683 master-0 systemd[1]: kubelet.service: Deactivated successfully.
Mar 13 01:17:31.916043 master-0 systemd[1]: Stopped Kubernetes Kubelet.
Mar 13 01:17:31.918978 master-0 systemd[1]: kubelet.service: Consumed 45.928s CPU time.
Mar 13 01:17:31.943141 master-0 systemd[1]: Starting Kubernetes Kubelet...
Mar 13 01:17:32.091163 master-0 kubenswrapper[19803]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:17:32.091163 master-0 kubenswrapper[19803]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Mar 13 01:17:32.091163 master-0 kubenswrapper[19803]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:17:32.091163 master-0 kubenswrapper[19803]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:17:32.091163 master-0 kubenswrapper[19803]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Mar 13 01:17:32.091163 master-0 kubenswrapper[19803]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Mar 13 01:17:32.092137 master-0 kubenswrapper[19803]: I0313 01:17:32.091256 19803 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Mar 13 01:17:32.094546 master-0 kubenswrapper[19803]: W0313 01:17:32.094486 19803 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:17:32.094620 master-0 kubenswrapper[19803]: W0313 01:17:32.094587 19803 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:17:32.094620 master-0 kubenswrapper[19803]: W0313 01:17:32.094600 19803 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:17:32.094620 master-0 kubenswrapper[19803]: W0313 01:17:32.094607 19803 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:17:32.094620 master-0 kubenswrapper[19803]: W0313 01:17:32.094615 19803 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 01:17:32.094620 master-0 kubenswrapper[19803]: W0313 01:17:32.094621 19803 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094630 19803 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094638 19803 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094645 19803 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094652 19803 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094680 19803 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094686 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094691 19803 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094696 19803 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094701 19803 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094706 19803 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094711 19803 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094716 19803 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094721 19803 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094727 19803 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094732 19803 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094738 19803 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094743 19803 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094749 19803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094753 19803 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:17:32.094867 master-0 kubenswrapper[19803]: W0313 01:17:32.094759 19803 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094764 19803 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094769 19803 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094774 19803 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094779 19803 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094784 19803 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094789 19803 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094794 19803 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094799 19803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094804 19803 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094809 19803 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094814 19803 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094819 19803 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094824 19803 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094829 19803 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094834 19803 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094838 19803 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094843 19803 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094849 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094855 19803 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:17:32.095764 master-0 kubenswrapper[19803]: W0313 01:17:32.094860 19803 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094865 19803 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094871 19803 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094876 19803 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094881 19803 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094887 19803 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094892 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094897 19803 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094903 19803 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094908 19803 feature_gate.go:330] unrecognized
feature gate: MultiArchInstallAzure Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094913 19803 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094918 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094923 19803 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094929 19803 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094934 19803 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094940 19803 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094946 19803 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094951 19803 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094956 19803 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094963 19803 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Mar 13 01:17:32.096434 master-0 kubenswrapper[19803]: W0313 01:17:32.094971 19803 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: W0313 01:17:32.094978 19803 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: W0313 01:17:32.094986 19803 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: W0313 01:17:32.094993 19803 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: W0313 01:17:32.094999 19803 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: W0313 01:17:32.095004 19803 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: W0313 01:17:32.095009 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095179 19803 flags.go:64] FLAG: --address="0.0.0.0"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095198 19803 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095211 19803 flags.go:64] FLAG: --anonymous-auth="true"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095220 19803 flags.go:64] FLAG: --application-metrics-count-limit="100"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095230 19803 flags.go:64] FLAG: --authentication-token-webhook="false"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095238 19803 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095249 19803 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095258 19803 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095267 19803 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095277 19803 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095286 19803 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095294 19803 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095302 19803 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095310 19803 flags.go:64] FLAG: --cgroup-root=""
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095318 19803 flags.go:64] FLAG: --cgroups-per-qos="true"
Mar 13 01:17:32.097138 master-0 kubenswrapper[19803]: I0313 01:17:32.095324 19803 flags.go:64] FLAG: --client-ca-file=""
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095330 19803 flags.go:64] FLAG: --cloud-config=""
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095337 19803 flags.go:64] FLAG: --cloud-provider=""
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095343 19803 flags.go:64] FLAG: --cluster-dns="[]"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095352 19803 flags.go:64] FLAG: --cluster-domain=""
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095357 19803 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095364 19803 flags.go:64] FLAG: --config-dir=""
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095370 19803 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095377 19803 flags.go:64] FLAG: --container-log-max-files="5"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095385 19803 flags.go:64] FLAG: --container-log-max-size="10Mi"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095396 19803 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095402 19803 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095409 19803 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095415 19803 flags.go:64] FLAG: --contention-profiling="false"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095421 19803 flags.go:64] FLAG: --cpu-cfs-quota="true"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095427 19803 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095433 19803 flags.go:64] FLAG: --cpu-manager-policy="none"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095439 19803 flags.go:64] FLAG: --cpu-manager-policy-options=""
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095447 19803 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095453 19803 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095459 19803 flags.go:64] FLAG: --enable-debugging-handlers="true"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095464 19803 flags.go:64] FLAG: --enable-load-reader="false"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095470 19803 flags.go:64] FLAG: --enable-server="true"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095476 19803 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095484 19803 flags.go:64] FLAG: --event-burst="100"
Mar 13 01:17:32.097900 master-0 kubenswrapper[19803]: I0313 01:17:32.095492 19803 flags.go:64] FLAG: --event-qps="50"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095499 19803 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095507 19803 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095542 19803 flags.go:64] FLAG: --eviction-hard=""
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095554 19803 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095562 19803 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095569 19803 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095577 19803 flags.go:64] FLAG: --eviction-soft=""
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095584 19803 flags.go:64] FLAG: --eviction-soft-grace-period=""
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095590 19803 flags.go:64] FLAG: --exit-on-lock-contention="false"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095596 19803 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095602 19803 flags.go:64] FLAG: --experimental-mounter-path=""
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095607 19803 flags.go:64] FLAG: --fail-cgroupv1="false"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095613 19803 flags.go:64] FLAG: --fail-swap-on="true"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095619 19803 flags.go:64] FLAG: --feature-gates=""
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095626 19803 flags.go:64] FLAG: --file-check-frequency="20s"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095632 19803 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095643 19803 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095649 19803 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095655 19803 flags.go:64] FLAG: --healthz-port="10248"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095663 19803 flags.go:64] FLAG: --help="false"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095669 19803 flags.go:64] FLAG: --hostname-override=""
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095674 19803 flags.go:64] FLAG: --housekeeping-interval="10s"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095680 19803 flags.go:64] FLAG: --http-check-frequency="20s"
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095686 19803 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Mar 13 01:17:32.098800 master-0 kubenswrapper[19803]: I0313 01:17:32.095692 19803 flags.go:64] FLAG: --image-credential-provider-config=""
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095698 19803 flags.go:64] FLAG: --image-gc-high-threshold="85"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095704 19803 flags.go:64] FLAG: --image-gc-low-threshold="80"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095710 19803 flags.go:64] FLAG: --image-service-endpoint=""
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095716 19803 flags.go:64] FLAG: --kernel-memcg-notification="false"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095721 19803 flags.go:64] FLAG: --kube-api-burst="100"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095727 19803 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095733 19803 flags.go:64] FLAG: --kube-api-qps="50"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095739 19803 flags.go:64] FLAG: --kube-reserved=""
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095744 19803 flags.go:64] FLAG: --kube-reserved-cgroup=""
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095750 19803 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095757 19803 flags.go:64] FLAG: --kubelet-cgroups=""
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095763 19803 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095769 19803 flags.go:64] FLAG: --lock-file=""
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095776 19803 flags.go:64] FLAG: --log-cadvisor-usage="false"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095782 19803 flags.go:64] FLAG: --log-flush-frequency="5s"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095788 19803 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095797 19803 flags.go:64] FLAG: --log-json-split-stream="false"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095803 19803 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095809 19803 flags.go:64] FLAG: --log-text-split-stream="false"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095815 19803 flags.go:64] FLAG: --logging-format="text"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095821 19803 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095827 19803 flags.go:64] FLAG: --make-iptables-util-chains="true"
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095833 19803 flags.go:64] FLAG: --manifest-url=""
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095841 19803 flags.go:64] FLAG: --manifest-url-header=""
Mar 13 01:17:32.100005 master-0 kubenswrapper[19803]: I0313 01:17:32.095848 19803 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095854 19803 flags.go:64] FLAG: --max-open-files="1000000"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095861 19803 flags.go:64] FLAG: --max-pods="110"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095867 19803 flags.go:64] FLAG: --maximum-dead-containers="-1"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095873 19803 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095878 19803 flags.go:64] FLAG: --memory-manager-policy="None"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095885 19803 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095891 19803 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095897 19803 flags.go:64] FLAG: --node-ip="192.168.32.10"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095903 19803 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095916 19803 flags.go:64] FLAG: --node-status-max-images="50"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095923 19803 flags.go:64] FLAG: --node-status-update-frequency="10s"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095928 19803 flags.go:64] FLAG: --oom-score-adj="-999"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095934 19803 flags.go:64] FLAG: --pod-cidr=""
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095940 19803 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1d605384f31a8085f78a96145c2c3dc51afe22721144196140a2699b7c07ebe3"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095950 19803 flags.go:64] FLAG: --pod-manifest-path=""
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095956 19803 flags.go:64] FLAG: --pod-max-pids="-1"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095962 19803 flags.go:64] FLAG: --pods-per-core="0"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095968 19803 flags.go:64] FLAG: --port="10250"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095975 19803 flags.go:64] FLAG: --protect-kernel-defaults="false"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095981 19803 flags.go:64] FLAG: --provider-id=""
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095987 19803 flags.go:64] FLAG: --qos-reserved=""
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095993 19803 flags.go:64] FLAG: --read-only-port="10255"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.095999 19803 flags.go:64] FLAG: --register-node="true"
Mar 13 01:17:32.101128 master-0 kubenswrapper[19803]: I0313 01:17:32.096004 19803 flags.go:64] FLAG: --register-schedulable="true"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096010 19803 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096020 19803 flags.go:64] FLAG: --registry-burst="10"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096026 19803 flags.go:64] FLAG: --registry-qps="5"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096031 19803 flags.go:64] FLAG: --reserved-cpus=""
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096037 19803 flags.go:64] FLAG: --reserved-memory=""
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096044 19803 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096052 19803 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096058 19803 flags.go:64] FLAG: --rotate-certificates="false"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096064 19803 flags.go:64] FLAG: --rotate-server-certificates="false"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096069 19803 flags.go:64] FLAG: --runonce="false"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096075 19803 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096081 19803 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096087 19803 flags.go:64] FLAG: --seccomp-default="false"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096093 19803 flags.go:64] FLAG: --serialize-image-pulls="true"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096099 19803 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096105 19803 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096111 19803 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096118 19803 flags.go:64] FLAG: --storage-driver-password="root"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096125 19803 flags.go:64] FLAG: --storage-driver-secure="false"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096131 19803 flags.go:64] FLAG: --storage-driver-table="stats"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096137 19803 flags.go:64] FLAG: --storage-driver-user="root"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096143 19803 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096149 19803 flags.go:64] FLAG: --sync-frequency="1m0s"
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096156 19803 flags.go:64] FLAG: --system-cgroups=""
Mar 13 01:17:32.102136 master-0 kubenswrapper[19803]: I0313 01:17:32.096162 19803 flags.go:64] FLAG: --system-reserved="cpu=500m,ephemeral-storage=1Gi,memory=1Gi"
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096171 19803 flags.go:64] FLAG: --system-reserved-cgroup=""
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096177 19803 flags.go:64] FLAG: --tls-cert-file=""
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096183 19803 flags.go:64] FLAG: --tls-cipher-suites="[]"
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096191 19803 flags.go:64] FLAG: --tls-min-version=""
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096197 19803 flags.go:64] FLAG: --tls-private-key-file=""
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096202 19803 flags.go:64] FLAG: --topology-manager-policy="none"
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096208 19803 flags.go:64] FLAG: --topology-manager-policy-options=""
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096214 19803 flags.go:64] FLAG: --topology-manager-scope="container"
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096220 19803 flags.go:64] FLAG: --v="2"
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096228 19803 flags.go:64] FLAG: --version="false"
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096236 19803 flags.go:64] FLAG: --vmodule=""
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096243 19803 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: I0313 01:17:32.096249 19803 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: W0313 01:17:32.096390 19803 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: W0313 01:17:32.096397 19803 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: W0313 01:17:32.096402 19803 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: W0313 01:17:32.096407 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: W0313 01:17:32.096412 19803 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: W0313 01:17:32.096417 19803 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: W0313 01:17:32.096423 19803 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: W0313 01:17:32.096428 19803 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: W0313 01:17:32.096433 19803 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:17:32.103010 master-0 kubenswrapper[19803]: W0313 01:17:32.096438 19803 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096444 19803 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096451 19803 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096457 19803 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096462 19803 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096469 19803 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096475 19803 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096481 19803 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096486 19803 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096491 19803 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096496 19803 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096501 19803 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096506 19803 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096531 19803 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096536 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096548 19803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096553 19803 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096558 19803 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096563 19803 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096567 19803 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 01:17:32.103832 master-0 kubenswrapper[19803]: W0313 01:17:32.096572 19803 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096577 19803 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096583 19803 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096589 19803 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096594 19803 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096599 19803 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096604 19803 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096609 19803 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096614 19803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096619 19803 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096625 19803 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096631 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096637 19803 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096642 19803 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096647 19803 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096652 19803 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096657 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096663 19803 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096668 19803 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096673 19803 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:17:32.104622 master-0 kubenswrapper[19803]: W0313 01:17:32.096678 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096683 19803 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096689 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096694 19803 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096699 19803 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096704 19803 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096708 19803 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096715 19803 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096722 19803 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096728 19803 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096734 19803 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096740 19803 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096745 19803 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096751 19803 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096756 19803 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096763 19803 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096768 19803 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096773 19803 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096779 19803 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 01:17:32.105341 master-0 kubenswrapper[19803]: W0313 01:17:32.096785 19803 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 01:17:32.106077 master-0 kubenswrapper[19803]: W0313 01:17:32.096792 19803 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:17:32.106077 master-0 kubenswrapper[19803]: W0313 01:17:32.096798 19803 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:17:32.106077 master-0 kubenswrapper[19803]: W0313 01:17:32.096805 19803 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:17:32.106077 master-0 kubenswrapper[19803]: I0313 01:17:32.096827 19803 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 01:17:32.106077 master-0 kubenswrapper[19803]: I0313 01:17:32.105878 19803 server.go:491] "Kubelet version" kubeletVersion="v1.31.14"
Mar 13 01:17:32.106077 master-0 kubenswrapper[19803]: I0313 01:17:32.105949 19803 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106087 19803 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106106 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106116 19803 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106127 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106136 19803 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106146 19803 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106154 19803 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106162 19803 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106170 19803 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106178 19803 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106186 19803 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106196 19803 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106209 19803 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106221 19803 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106230 19803 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106238 19803 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106247 19803 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106255 19803 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 01:17:32.106301 master-0 kubenswrapper[19803]: W0313 01:17:32.106263 19803 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106271 19803 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106279 19803 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106287 19803 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106295 19803 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106304 19803 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106314 19803 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106323 19803 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106332 19803 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106341 19803 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106353 19803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106362 19803 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106370 19803 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106379 19803 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106387 19803 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106396 19803 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106405 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106412 19803 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106420 19803 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106428 19803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:17:32.107712 master-0 kubenswrapper[19803]: W0313 01:17:32.106436 19803 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106444 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106452 19803 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106459 19803 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106467 19803 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106475 19803 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106483 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106491 19803 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106499 19803 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106536 19803 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106546 19803 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106555 19803 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106563 19803 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106571 19803 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106580 19803 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106591 19803 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106605 19803 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106616 19803 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106626 19803 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:17:32.108665 master-0 kubenswrapper[19803]: W0313 01:17:32.106635 19803 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106645 19803 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106654 19803 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106662 19803 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106670 19803 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106678 19803 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106687 19803 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106697 19803 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106705 19803 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106713 19803 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106722 19803 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106729 19803 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106738 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106746 19803 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: W0313 01:17:32.106754 19803 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:17:32.109373 master-0 kubenswrapper[19803]: I0313 01:17:32.106768 19803 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107024 19803 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107039 19803 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107051 19803 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107062 19803 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107075 19803 feature_gate.go:330] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107084 19803 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107161 19803 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107180 19803 feature_gate.go:330] unrecognized feature gate: Example
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107191 19803 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107199 19803 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107208 19803 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107216 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107225 19803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107234 19803 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107244 19803 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107253 19803 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107263 19803 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107273 19803 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Mar 13 01:17:32.110080 master-0 kubenswrapper[19803]: W0313 01:17:32.107283 19803 feature_gate.go:330] unrecognized feature gate: SignatureStores
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107292 19803 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107300 19803 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107308 19803 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107317 19803 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107325 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107334 19803 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107342 19803 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107352 19803 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107361 19803 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107370 19803 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107378 19803 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107386 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107394 19803 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107401 19803 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107409 19803 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107420 19803 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107431 19803 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107441 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Mar 13 01:17:32.112555 master-0 kubenswrapper[19803]: W0313 01:17:32.107452 19803 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107461 19803 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107472 19803 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107481 19803 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107489 19803 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107499 19803 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107507 19803 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107538 19803 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107547 19803 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107555 19803 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107563 19803 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107571 19803 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107580 19803 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107588 19803 feature_gate.go:330] unrecognized feature gate: NewOLM
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107597 19803 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107605 19803 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107615 19803 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107625 19803 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107637 19803 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107647 19803 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Mar 13 01:17:32.113230 master-0 kubenswrapper[19803]: W0313 01:17:32.107657 19803 feature_gate.go:330] unrecognized feature gate: PinnedImages
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107667 19803 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107678 19803 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107688 19803 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107699 19803 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107710 19803 feature_gate.go:330] unrecognized feature gate: OVNObservability
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107724 19803 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107735 19803 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107747 19803 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107758 19803 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107770 19803 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107780 19803 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107791 19803 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107801 19803 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: W0313 01:17:32.107815 19803 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Mar 13 01:17:32.114107 master-0 kubenswrapper[19803]: I0313 01:17:32.107834 19803 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false StreamingCollectionEncodingToJSON:true StreamingCollectionEncodingToProtobuf:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Mar 13 01:17:32.114664 master-0 kubenswrapper[19803]: I0313 01:17:32.108231 19803 server.go:940] "Client rotation is on, will bootstrap in background"
Mar 13 01:17:32.114664 master-0 kubenswrapper[19803]: I0313 01:17:32.114092 19803 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Mar 13 01:17:32.114664 master-0 kubenswrapper[19803]: I0313 01:17:32.114300 19803 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Mar 13 01:17:32.114884 master-0 kubenswrapper[19803]: I0313 01:17:32.114841 19803 server.go:997] "Starting client certificate rotation"
Mar 13 01:17:32.114884 master-0 kubenswrapper[19803]: I0313 01:17:32.114872 19803 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Mar 13 01:17:32.115201 master-0 kubenswrapper[19803]: I0313 01:17:32.115069 19803 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-03-14 01:02:11 +0000 UTC, rotation deadline is 2026-03-13 21:10:01.605454816 +0000 UTC
Mar 13 01:17:32.115249 master-0 kubenswrapper[19803]: I0313 01:17:32.115198 19803 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 19h52m29.490263621s for next certificate rotation
Mar 13 01:17:32.116092 master-0 kubenswrapper[19803]: I0313 01:17:32.116051 19803 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 01:17:32.118802 master-0 kubenswrapper[19803]: I0313 01:17:32.118746 19803 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Mar 13 01:17:32.124059 master-0 kubenswrapper[19803]: I0313 01:17:32.124010 19803 log.go:25] "Validated CRI v1 runtime API"
Mar 13 01:17:32.128892 master-0 kubenswrapper[19803]: I0313 01:17:32.128838 19803 log.go:25] "Validated CRI v1 image API"
Mar 13 01:17:32.130940 master-0 kubenswrapper[19803]: I0313 01:17:32.130888 19803 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Mar 13 01:17:32.148120 master-0
kubenswrapper[19803]: I0313 01:17:32.148055 19803 fs.go:135] Filesystem UUIDs: map[157256f6-add8-4ac1-82d5-8fc6c96a0913:/dev/vda3 7B77-95E7:/dev/vda2 910678ff-f77e-4a7d-8d53-86f2ac47a823:/dev/vda4]
Mar 13 01:17:32.149722 master-0 kubenswrapper[19803]: I0313 01:17:32.148107 19803 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/032c2b20f604f0aca4515b1e3c70d1cee6305981fa2fc0ade62b27cbdcf9dd58/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/032c2b20f604f0aca4515b1e3c70d1cee6305981fa2fc0ade62b27cbdcf9dd58/userdata/shm major:0 minor:1025 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/061fc67620de1b52747445ea534c41ab6513f37b1f03a4e68b4308398d499797/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/061fc67620de1b52747445ea534c41ab6513f37b1f03a4e68b4308398d499797/userdata/shm major:0 minor:522 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/075e91dd63c1e740e494eccc3ead8f62731d857d106f25bfcfaa922018525117/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/075e91dd63c1e740e494eccc3ead8f62731d857d106f25bfcfaa922018525117/userdata/shm major:0 minor:619 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/09550970a5450b6b18862ef0c3ad02b9ed34a2674a41f1a5f7113f8a2249dc19/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/09550970a5450b6b18862ef0c3ad02b9ed34a2674a41f1a5f7113f8a2249dc19/userdata/shm major:0 minor:129 fsType:tmpfs blockSize:0}
/run/containers/storage/overlay-containers/14fb0b2eb240219320e6992cc4659cd81f4b0471ff79cf3cf2e89fa8f1d605a0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/14fb0b2eb240219320e6992cc4659cd81f4b0471ff79cf3cf2e89fa8f1d605a0/userdata/shm major:0 minor:425 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/19d9989080bb99254df4633b984ed6ac361fb3f67806322eddb375cdee316de2/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/19d9989080bb99254df4633b984ed6ac361fb3f67806322eddb375cdee316de2/userdata/shm major:0 minor:251 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1c7730337a9a87451fb670287a107087b846f8e46926bb6ce0f97f0cb44507c6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1c7730337a9a87451fb670287a107087b846f8e46926bb6ce0f97f0cb44507c6/userdata/shm major:0 minor:525 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1ea0ea4e5eed6b85ccc36c4c8c0dc8b3b9419340ae19c9233bb9409a6a59c6b0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ea0ea4e5eed6b85ccc36c4c8c0dc8b3b9419340ae19c9233bb9409a6a59c6b0/userdata/shm major:0 minor:535 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/1ee1fa592b43fd04f438a18672ba5cbe2212eefd748a0d3d95e70d1fbb463e36/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/1ee1fa592b43fd04f438a18672ba5cbe2212eefd748a0d3d95e70d1fbb463e36/userdata/shm major:0 minor:275 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2098d43302ad0e00931b30fb0473a362fee9e9000b89c27552d72a632e47afbd/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2098d43302ad0e00931b30fb0473a362fee9e9000b89c27552d72a632e47afbd/userdata/shm major:0 minor:264 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/21da7cd9c215e50e56d0756a974eda56d485e36242a9ade62bb96f7d9a66d36e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/21da7cd9c215e50e56d0756a974eda56d485e36242a9ade62bb96f7d9a66d36e/userdata/shm major:0 minor:834 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/234a363db8b880e78d95679d40d82c251d6d6e0dfd2a1cd27b2a2de32ddb7344/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/234a363db8b880e78d95679d40d82c251d6d6e0dfd2a1cd27b2a2de32ddb7344/userdata/shm major:0 minor:624 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/24dc2549a8ac6f39dd6f57c57f717e50a501dd15d60d7e2a80b78b592b931b48/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/24dc2549a8ac6f39dd6f57c57f717e50a501dd15d60d7e2a80b78b592b931b48/userdata/shm major:0 minor:114 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/2697e850ca89be32459183985b3f9fee84b93466b86c6d103ecf18157fa8b712/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2697e850ca89be32459183985b3f9fee84b93466b86c6d103ecf18157fa8b712/userdata/shm major:0 minor:260 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/28719caedf8b1f4ed31a1dd696057fe3b52449ba6c0d76bcf9bc027a93b14830/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/28719caedf8b1f4ed31a1dd696057fe3b52449ba6c0d76bcf9bc027a93b14830/userdata/shm major:0 minor:891 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/29c3e604fa02f6812f7b745e1345b811004751bfdbd70448e21ada412112c94f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/29c3e604fa02f6812f7b745e1345b811004751bfdbd70448e21ada412112c94f/userdata/shm major:0 minor:628 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/2f532f863189cb40165138dbb4b485ec37ab7ca8ad6591b3d559de34664f9afe/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/2f532f863189cb40165138dbb4b485ec37ab7ca8ad6591b3d559de34664f9afe/userdata/shm major:0 minor:992 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/33bf722525c772142ec0cd09e0392bf59b78686977bb452929548b6bc04bfae5/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/33bf722525c772142ec0cd09e0392bf59b78686977bb452929548b6bc04bfae5/userdata/shm major:0 minor:892 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/343ebb9e9f7133e28dc8b97a72067095722cd38fc5a1cd6bd72819c24b19f9a4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/343ebb9e9f7133e28dc8b97a72067095722cd38fc5a1cd6bd72819c24b19f9a4/userdata/shm major:0 minor:269 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/35dc923311215b12bc6926327888353ee4dac03edf2bd01fd1709920b747d038/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/35dc923311215b12bc6926327888353ee4dac03edf2bd01fd1709920b747d038/userdata/shm major:0 minor:119 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846/userdata/shm major:0 minor:54 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3678d76d6368f04d7424fd0ae731dc627699ae26c8d8180a738d9913435c9819/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3678d76d6368f04d7424fd0ae731dc627699ae26c8d8180a738d9913435c9819/userdata/shm major:0 minor:263 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/37840cae91bb38842e33e47936d655dcd095da55d1359acc8622a63bc2e2f08c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/37840cae91bb38842e33e47936d655dcd095da55d1359acc8622a63bc2e2f08c/userdata/shm major:0 minor:254 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/3be9d647691aac847285be1df15dfc7365f7b948dc0fd04d51bc4a610b82da33/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/3be9d647691aac847285be1df15dfc7365f7b948dc0fd04d51bc4a610b82da33/userdata/shm major:0 minor:465 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/458ff1fddfd5f3b95a485a4b0cb8e88a31c5825a6f8733cb5141f441c672f2be/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/458ff1fddfd5f3b95a485a4b0cb8e88a31c5825a6f8733cb5141f441c672f2be/userdata/shm major:0 minor:142 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/4ca67e8bef4478f002e4442f5b186c7d786535b25d6573f50f3d477a22f7f668/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/4ca67e8bef4478f002e4442f5b186c7d786535b25d6573f50f3d477a22f7f668/userdata/shm major:0 minor:791 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/50cd4dbba0595bc95bd8379d7cfd780825252615fdd5f10e3bb402ec0d1d10ce/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/50cd4dbba0595bc95bd8379d7cfd780825252615fdd5f10e3bb402ec0d1d10ce/userdata/shm major:0 minor:652 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5cba1e5f698e98df3c15a1fd7c6d0586c623f3939d642ba858d361854e19b48c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5cba1e5f698e98df3c15a1fd7c6d0586c623f3939d642ba858d361854e19b48c/userdata/shm major:0 minor:468 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/5cdd48b8a2071aa3abf6b5c8005e72c1dbb38aa6a21e58f6cbdd8c251468cb41/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5cdd48b8a2071aa3abf6b5c8005e72c1dbb38aa6a21e58f6cbdd8c251468cb41/userdata/shm major:0 minor:524 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/5fc26918eff78c25b88ab7c1476de02488bb5aaefb35f371b1d5f4a9fb66fe67/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/5fc26918eff78c25b88ab7c1476de02488bb5aaefb35f371b1d5f4a9fb66fe67/userdata/shm major:0 minor:921 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/60159a917a34f7d64b3ba3a186dff388b89b7011483106eb857811a35e9e0fbb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/60159a917a34f7d64b3ba3a186dff388b89b7011483106eb857811a35e9e0fbb/userdata/shm major:0 minor:86 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/60de85ba97afadaf001b2cf07b2675a887f7f03299ff0b0c7cf2b1b3a76b1ac0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/60de85ba97afadaf001b2cf07b2675a887f7f03299ff0b0c7cf2b1b3a76b1ac0/userdata/shm major:0 minor:453 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/629398d15647c2b03b039e1c1901983e50f62b43495a0b3d1356a29ab7579f04/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/629398d15647c2b03b039e1c1901983e50f62b43495a0b3d1356a29ab7579f04/userdata/shm major:0 minor:259 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6383bf63a7de4dff04fb7232e0771348dcd4ed98fc693d66e08acc1fc0e8ce69/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6383bf63a7de4dff04fb7232e0771348dcd4ed98fc693d66e08acc1fc0e8ce69/userdata/shm major:0 minor:717 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/65392793bd94fcb00daa4e5e0befa1cdc4621ed4d78484330a8ebe817e639598/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/65392793bd94fcb00daa4e5e0befa1cdc4621ed4d78484330a8ebe817e639598/userdata/shm major:0 minor:559 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/658f47ce3c2ae2a79030288ee1e25fc5980adee4919ddd23b5841d0fa0c0c0bb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/658f47ce3c2ae2a79030288ee1e25fc5980adee4919ddd23b5841d0fa0c0c0bb/userdata/shm major:0 minor:521 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6923888f2474b2621a6d1f7b4784be73fc6d36844a46c111dbeb08c776fa9c52/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6923888f2474b2621a6d1f7b4784be73fc6d36844a46c111dbeb08c776fa9c52/userdata/shm major:0 minor:273 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/6d6b932de4337ed7b1b29feb31dfecf2b00d8a0c27165dce010504a3cf2e5f0a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/6d6b932de4337ed7b1b29feb31dfecf2b00d8a0c27165dce010504a3cf2e5f0a/userdata/shm major:0 minor:609 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7045bd9f4a827f56cb7bd9e063ae71240fc184218e9ad8e94a5fef4b4d176a48/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7045bd9f4a827f56cb7bd9e063ae71240fc184218e9ad8e94a5fef4b4d176a48/userdata/shm major:0 minor:1036 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/719fed2d09a4c83ba7a2065c6d705852286e4074c168ef17e96ec1f4c19087b7/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/719fed2d09a4c83ba7a2065c6d705852286e4074c168ef17e96ec1f4c19087b7/userdata/shm major:0 minor:61 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/72c7baf13da514fc8287177e18c17708037dccda828bfe98993c839421246be0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/72c7baf13da514fc8287177e18c17708037dccda828bfe98993c839421246be0/userdata/shm major:0 minor:409 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7477b641a786f084712c4f118bc6505bfe95f699f9d24590d99cd384fbe82b5c/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7477b641a786f084712c4f118bc6505bfe95f699f9d24590d99cd384fbe82b5c/userdata/shm major:0 minor:257 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/770fca1b39851d439e2eba8f53f5e8c6629f240ddb04931d7537be93916cfc27/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/770fca1b39851d439e2eba8f53f5e8c6629f240ddb04931d7537be93916cfc27/userdata/shm major:0 minor:526 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/7d990bd61a1e1a51f37259ece6d1d14af9e817f84717aafe9cfddf2f1cc1af71/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/7d990bd61a1e1a51f37259ece6d1d14af9e817f84717aafe9cfddf2f1cc1af71/userdata/shm major:0 minor:793 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/97073f9eaab3f9a84928efdbbff240af7a669518355dadabf3d81bed9aec4570/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/97073f9eaab3f9a84928efdbbff240af7a669518355dadabf3d81bed9aec4570/userdata/shm major:0 minor:844 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9d5e008bf9f6b695cb5f727240a0c351d82558f527dcc2602815400da2d730f6/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9d5e008bf9f6b695cb5f727240a0c351d82558f527dcc2602815400da2d730f6/userdata/shm major:0 minor:534 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/9d7ef7e44d8730ad2d704e378ac9c92d16d1c8fa25bdd5cfebf66d699f0e0906/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9d7ef7e44d8730ad2d704e378ac9c92d16d1c8fa25bdd5cfebf66d699f0e0906/userdata/shm major:0 minor:707 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9eabc21ddc531984c62d09d80b4ff970db77726a77a7e29d7793ee390a8437b9/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9eabc21ddc531984c62d09d80b4ff970db77726a77a7e29d7793ee390a8437b9/userdata/shm major:0 minor:81 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/9ed0f2af24dce87330ff074848aa9e663492193136113ddae19217ced58912fa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/9ed0f2af24dce87330ff074848aa9e663492193136113ddae19217ced58912fa/userdata/shm major:0 minor:469 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/a1c5dbaa4dceb86f442ef113d610b47a414073825f45b1abbdb54ba9c2a0c83a/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/a1c5dbaa4dceb86f442ef113d610b47a414073825f45b1abbdb54ba9c2a0c83a/userdata/shm major:0 minor:433 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/aed424610f368f2ab3bbdf35a68a20b721e3a40783a95dd4a322c10d00ffa3aa/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/aed424610f368f2ab3bbdf35a68a20b721e3a40783a95dd4a322c10d00ffa3aa/userdata/shm major:0 minor:423 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/b055cbc200ec047aacb638d82e675e244c203df858dcd01394edc1e4bc014d9f/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b055cbc200ec047aacb638d82e675e244c203df858dcd01394edc1e4bc014d9f/userdata/shm major:0 minor:467 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/b61dc113f1a4bef80c641546e2474c72c189dd507d27eb4f40039500f234ba15/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/b61dc113f1a4bef80c641546e2474c72c189dd507d27eb4f40039500f234ba15/userdata/shm major:0 minor:772 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/bc8f7d43b71dfb70df609090acace3d9c40c52d842b2f9e449644f3b06944eff/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/bc8f7d43b71dfb70df609090acace3d9c40c52d842b2f9e449644f3b06944eff/userdata/shm major:0 minor:104 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/be6c496962a8987f21c42524b12c5d8025b66ff294e50520947b2cd7bb0af865/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/be6c496962a8987f21c42524b12c5d8025b66ff294e50520947b2cd7bb0af865/userdata/shm major:0 minor:967 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/c598fb9b925a609d9065bd53d80c03d631ad5c318188796c910960611dc611f4/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/c598fb9b925a609d9065bd53d80c03d631ad5c318188796c910960611dc611f4/userdata/shm major:0 minor:523 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cdd0c71504e94f6dcb39dab229fb181eeb5ab28f2092fb5e419d885709d3d1ae/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cdd0c71504e94f6dcb39dab229fb181eeb5ab28f2092fb5e419d885709d3d1ae/userdata/shm major:0 minor:448 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/cfc26f6d3347a68e4b723da2b42435408304ba3ab936c3e96d2706d8fe04b73e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/cfc26f6d3347a68e4b723da2b42435408304ba3ab936c3e96d2706d8fe04b73e/userdata/shm major:0 minor:97 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/d243e098a2bf2092df86880b77adaed46c59e61e072be24c44913d8532c87256/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d243e098a2bf2092df86880b77adaed46c59e61e072be24c44913d8532c87256/userdata/shm major:0 minor:98 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d285e2cd3ad810bbe2e32e2bf486a60f25f240f9aaa8797930d7581cb9051bc3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d285e2cd3ad810bbe2e32e2bf486a60f25f240f9aaa8797930d7581cb9051bc3/userdata/shm major:0 minor:520 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d3c9a7ae76767c58b811cabb43c24171c3fc11aa2f0559500ff39ed6ef226896/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d3c9a7ae76767c58b811cabb43c24171c3fc11aa2f0559500ff39ed6ef226896/userdata/shm major:0 minor:414 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/d525ab3b1b5859648620b47f3759af91f036909616f6c49b660fe4a797d2c3f0/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/d525ab3b1b5859648620b47f3759af91f036909616f6c49b660fe4a797d2c3f0/userdata/shm major:0 minor:893 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/da609cd6cbb5b9e771ac633c351aa8997603432a2f5300b5aa8eef97f27120bb/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/da609cd6cbb5b9e771ac633c351aa8997603432a2f5300b5aa8eef97f27120bb/userdata/shm major:0 minor:271 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/de825527d944f688f2acf2625cf8789a7117e73fdf8ca84b446d4e5ce667dc74/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/de825527d944f688f2acf2625cf8789a7117e73fdf8ca84b446d4e5ce667dc74/userdata/shm major:0 minor:245 fsType:tmpfs blockSize:0} 
/run/containers/storage/overlay-containers/e4bd5af8e1a96f925e1b64f7902f036f84c366c1ef01152f845644c1aa6a1b22/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/e4bd5af8e1a96f925e1b64f7902f036f84c366c1ef01152f845644c1aa6a1b22/userdata/shm major:0 minor:466 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/ec17a1f92974fc202f31cbb68ea7af983419d8c972a92fa5e88ff84c017f8e6d/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/ec17a1f92974fc202f31cbb68ea7af983419d8c972a92fa5e88ff84c017f8e6d/userdata/shm major:0 minor:128 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f1893a5398893367fa6dfc57f35d1608dbd0ecd13591ae45338583f2663f6d59/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f1893a5398893367fa6dfc57f35d1608dbd0ecd13591ae45338583f2663f6d59/userdata/shm major:0 minor:46 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f26f2fe408a83b7887b45acd945c90cef651bf2e6e61b90316af3ed0a1cd741e/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f26f2fe408a83b7887b45acd945c90cef651bf2e6e61b90316af3ed0a1cd741e/userdata/shm major:0 minor:241 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f43b125de8c8fd9d38adfd65f25335aed5effea8536c299385f910d4e86c6dd3/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f43b125de8c8fd9d38adfd65f25335aed5effea8536c299385f910d4e86c6dd3/userdata/shm major:0 minor:1041 fsType:tmpfs blockSize:0} /run/containers/storage/overlay-containers/f90e074f8ab2848261d7ebd8ff2e240e768ffdf256ecd5a6670700d24212e960/userdata/shm:{mountpoint:/run/containers/storage/overlay-containers/f90e074f8ab2848261d7ebd8ff2e240e768ffdf256ecd5a6670700d24212e960/userdata/shm major:0 minor:807 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/volumes/kubernetes.io~projected/ca-certs major:0 minor:678 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/volumes/kubernetes.io~projected/kube-api-access-nbcg4:{mountpoint:/var/lib/kubelet/pods/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/volumes/kubernetes.io~projected/kube-api-access-nbcg4 major:0 minor:694 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~projected/kube-api-access-vccjz:{mountpoint:/var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~projected/kube-api-access-vccjz major:0 minor:1038 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~secret/default-certificate:{mountpoint:/var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~secret/default-certificate major:0 minor:1033 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~secret/metrics-certs major:0 minor:1034 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~secret/stats-auth:{mountpoint:/var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~secret/stats-auth major:0 minor:1032 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/volumes/kubernetes.io~projected/kube-api-access-5zzqj:{mountpoint:/var/lib/kubelet/pods/0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/volumes/kubernetes.io~projected/kube-api-access-5zzqj major:0 minor:450 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/0ff72b58-aca9-46f1-86ca-da8339734ac9/volumes/kubernetes.io~secret/tls-certificates:{mountpoint:/var/lib/kubelet/pods/0ff72b58-aca9-46f1-86ca-da8339734ac9/volumes/kubernetes.io~secret/tls-certificates major:0 minor:1035 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/161d2fa6-a541-427a-a3e9-3297102a26f5/volumes/kubernetes.io~projected/kube-api-access-q5lg5:{mountpoint:/var/lib/kubelet/pods/161d2fa6-a541-427a-a3e9-3297102a26f5/volumes/kubernetes.io~projected/kube-api-access-q5lg5 major:0 minor:236 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/161d2fa6-a541-427a-a3e9-3297102a26f5/volumes/kubernetes.io~secret/webhook-certs:{mountpoint:/var/lib/kubelet/pods/161d2fa6-a541-427a-a3e9-3297102a26f5/volumes/kubernetes.io~secret/webhook-certs major:0 minor:517 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21110b48-25fc-434a-b156-7f6bd6064bed/volumes/kubernetes.io~projected/kube-api-access-9npsh:{mountpoint:/var/lib/kubelet/pods/21110b48-25fc-434a-b156-7f6bd6064bed/volumes/kubernetes.io~projected/kube-api-access-9npsh major:0 minor:811 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21110b48-25fc-434a-b156-7f6bd6064bed/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/21110b48-25fc-434a-b156-7f6bd6064bed/volumes/kubernetes.io~secret/cert major:0 minor:642 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/21110b48-25fc-434a-b156-7f6bd6064bed/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls:{mountpoint:/var/lib/kubelet/pods/21110b48-25fc-434a-b156-7f6bd6064bed/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls major:0 minor:810 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~projected/kube-api-access-pqfj5:{mountpoint:/var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~projected/kube-api-access-pqfj5 major:0 minor:252 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~secret/serving-cert major:0 minor:247 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~projected/kube-api-access-smhrl:{mountpoint:/var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~projected/kube-api-access-smhrl major:0 minor:229 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert major:0 minor:217 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2581e5b5-8cbb-4fa5-9888-98fb572a6232/volumes/kubernetes.io~projected/kube-api-access-gh7ks:{mountpoint:/var/lib/kubelet/pods/2581e5b5-8cbb-4fa5-9888-98fb572a6232/volumes/kubernetes.io~projected/kube-api-access-gh7ks major:0 minor:825 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2581e5b5-8cbb-4fa5-9888-98fb572a6232/volumes/kubernetes.io~secret/cert:{mountpoint:/var/lib/kubelet/pods/2581e5b5-8cbb-4fa5-9888-98fb572a6232/volumes/kubernetes.io~secret/cert major:0 minor:824 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2760a216-fd4b-46d9-a4ec-2d3285ec02bd/volumes/kubernetes.io~projected/kube-api-access-4lqgs:{mountpoint:/var/lib/kubelet/pods/2760a216-fd4b-46d9-a4ec-2d3285ec02bd/volumes/kubernetes.io~projected/kube-api-access-4lqgs major:0 minor:596 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/2760a216-fd4b-46d9-a4ec-2d3285ec02bd/volumes/kubernetes.io~secret/machine-api-operator-tls:{mountpoint:/var/lib/kubelet/pods/2760a216-fd4b-46d9-a4ec-2d3285ec02bd/volumes/kubernetes.io~secret/machine-api-operator-tls major:0 minor:536 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/2ec42095-36f5-48cf-af9d-e7a60f6cb121/volumes/kubernetes.io~projected/kube-api-access-hngc8:{mountpoint:/var/lib/kubelet/pods/2ec42095-36f5-48cf-af9d-e7a60f6cb121/volumes/kubernetes.io~projected/kube-api-access-hngc8 major:0 minor:1039 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31f19d97-50f9-4486-a8f9-df61ef2b0528/volumes/kubernetes.io~projected/kube-api-access-4bzs5:{mountpoint:/var/lib/kubelet/pods/31f19d97-50f9-4486-a8f9-df61ef2b0528/volumes/kubernetes.io~projected/kube-api-access-4bzs5 major:0 minor:249 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/31f19d97-50f9-4486-a8f9-df61ef2b0528/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/31f19d97-50f9-4486-a8f9-df61ef2b0528/volumes/kubernetes.io~secret/srv-cert major:0 minor:515 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3418d0fb-d0ae-4634-a645-dc387a19147f/volumes/kubernetes.io~projected/kube-api-access-tdpt2:{mountpoint:/var/lib/kubelet/pods/3418d0fb-d0ae-4634-a645-dc387a19147f/volumes/kubernetes.io~projected/kube-api-access-tdpt2 major:0 minor:986 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/3418d0fb-d0ae-4634-a645-dc387a19147f/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/3418d0fb-d0ae-4634-a645-dc387a19147f/volumes/kubernetes.io~secret/proxy-tls major:0 minor:980 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/34889110-f282-4c2c-a2b0-620033559e1b/volumes/kubernetes.io~projected/kube-api-access-tlgsr:{mountpoint:/var/lib/kubelet/pods/34889110-f282-4c2c-a2b0-620033559e1b/volumes/kubernetes.io~projected/kube-api-access-tlgsr major:0 minor:410 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/46015913-c499-49b1-a9f6-a61c6e96b13f/volumes/kubernetes.io~projected/kube-api-access-jc8xs:{mountpoint:/var/lib/kubelet/pods/46015913-c499-49b1-a9f6-a61c6e96b13f/volumes/kubernetes.io~projected/kube-api-access-jc8xs major:0 minor:230 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/46015913-c499-49b1-a9f6-a61c6e96b13f/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls:{mountpoint:/var/lib/kubelet/pods/46015913-c499-49b1-a9f6-a61c6e96b13f/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls major:0 minor:513 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volume-subpaths/run-systemd/ovnkube-controller/6:{mountpoint:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volume-subpaths/run-systemd/ovnkube-controller/6 major:0 minor:24 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~projected/kube-api-access-n58nf:{mountpoint:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~projected/kube-api-access-n58nf major:0 minor:127 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~secret/ovn-node-metrics-cert:{mountpoint:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~secret/ovn-node-metrics-cert major:0 minor:126 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~projected/kube-api-access-pt5g7:{mountpoint:/var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~projected/kube-api-access-pt5g7 major:0 minor:648 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~secret/encryption-config major:0 minor:645 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~secret/etcd-client major:0 minor:647 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~secret/serving-cert major:0 minor:646 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/volumes/kubernetes.io~projected/kube-api-access-98t5h:{mountpoint:/var/lib/kubelet/pods/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/volumes/kubernetes.io~projected/kube-api-access-98t5h major:0 minor:250 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/volumes/kubernetes.io~secret/package-server-manager-serving-cert:{mountpoint:/var/lib/kubelet/pods/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/volumes/kubernetes.io~secret/package-server-manager-serving-cert major:0 minor:512 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56e20b21-ba17-46ae-a740-0e7bd45eae5f/volumes/kubernetes.io~projected/kube-api-access-g89p7:{mountpoint:/var/lib/kubelet/pods/56e20b21-ba17-46ae-a740-0e7bd45eae5f/volumes/kubernetes.io~projected/kube-api-access-g89p7 major:0 minor:812 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/56e20b21-ba17-46ae-a740-0e7bd45eae5f/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls:{mountpoint:/var/lib/kubelet/pods/56e20b21-ba17-46ae-a740-0e7bd45eae5f/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls major:0 minor:818 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/581ff17d-f121-4ece-8e45-81f1f710d163/volumes/kubernetes.io~projected/kube-api-access-pgz5w:{mountpoint:/var/lib/kubelet/pods/581ff17d-f121-4ece-8e45-81f1f710d163/volumes/kubernetes.io~projected/kube-api-access-pgz5w major:0 minor:418 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/581ff17d-f121-4ece-8e45-81f1f710d163/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/581ff17d-f121-4ece-8e45-81f1f710d163/volumes/kubernetes.io~secret/serving-cert major:0 minor:411 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/65dd1dc7-1b90-40f6-82c9-dee90a1fa852/volumes/kubernetes.io~projected/kube-api-access-vt62j:{mountpoint:/var/lib/kubelet/pods/65dd1dc7-1b90-40f6-82c9-dee90a1fa852/volumes/kubernetes.io~projected/kube-api-access-vt62j major:0 minor:602 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65dd1dc7-1b90-40f6-82c9-dee90a1fa852/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/65dd1dc7-1b90-40f6-82c9-dee90a1fa852/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert major:0 minor:601 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65ef9aae-25a5-46c6-adf3-634f8f7a29bc/volumes/kubernetes.io~projected/kube-api-access-psvcz:{mountpoint:/var/lib/kubelet/pods/65ef9aae-25a5-46c6-adf3-634f8f7a29bc/volumes/kubernetes.io~projected/kube-api-access-psvcz major:0 minor:827 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/65ef9aae-25a5-46c6-adf3-634f8f7a29bc/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert:{mountpoint:/var/lib/kubelet/pods/65ef9aae-25a5-46c6-adf3-634f8f7a29bc/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert major:0 minor:826 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/69da0e58-2ae6-4d4b-b125-77e93df3d660/volumes/kubernetes.io~projected/kube-api-access-pzxv5:{mountpoint:/var/lib/kubelet/pods/69da0e58-2ae6-4d4b-b125-77e93df3d660/volumes/kubernetes.io~projected/kube-api-access-pzxv5 major:0 minor:248 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ad2904e-ece9-4d72-8683-c3e691e07497/volumes/kubernetes.io~projected/kube-api-access-k5gc8:{mountpoint:/var/lib/kubelet/pods/6ad2904e-ece9-4d72-8683-c3e691e07497/volumes/kubernetes.io~projected/kube-api-access-k5gc8 major:0 minor:221 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6ad2904e-ece9-4d72-8683-c3e691e07497/volumes/kubernetes.io~secret/srv-cert:{mountpoint:/var/lib/kubelet/pods/6ad2904e-ece9-4d72-8683-c3e691e07497/volumes/kubernetes.io~secret/srv-cert major:0 minor:514 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/6da2aac0-42a0-45c2-93ec-b148f5889e8b/volumes/kubernetes.io~projected/kube-api-access-9rtds:{mountpoint:/var/lib/kubelet/pods/6da2aac0-42a0-45c2-93ec-b148f5889e8b/volumes/kubernetes.io~projected/kube-api-access-9rtds major:0 minor:816 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e799871-735a-44e8-8193-24c5bb388928/volumes/kubernetes.io~projected/kube-api-access-jthxn:{mountpoint:/var/lib/kubelet/pods/6e799871-735a-44e8-8193-24c5bb388928/volumes/kubernetes.io~projected/kube-api-access-jthxn major:0 minor:829 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6e799871-735a-44e8-8193-24c5bb388928/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6e799871-735a-44e8-8193-24c5bb388928/volumes/kubernetes.io~secret/serving-cert major:0 minor:828 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~projected/kube-api-access-k8l9r:{mountpoint:/var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~projected/kube-api-access-k8l9r major:0 minor:235 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~secret/serving-cert major:0 minor:222 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~projected/kube-api-access-8jkzq:{mountpoint:/var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~projected/kube-api-access-8jkzq major:0 minor:231 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~secret/serving-cert major:0 minor:213 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:238 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/kube-api-access-zpdjh:{mountpoint:/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/kube-api-access-zpdjh major:0 minor:240 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~secret/metrics-tls major:0 minor:460 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0/volumes/kubernetes.io~projected/kube-api-access-98t7n:{mountpoint:/var/lib/kubelet/pods/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0/volumes/kubernetes.io~projected/kube-api-access-98t7n major:0 minor:641 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0/volumes/kubernetes.io~secret/samples-operator-tls:{mountpoint:/var/lib/kubelet/pods/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0/volumes/kubernetes.io~secret/samples-operator-tls major:0 minor:640 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~projected/kube-api-access-fz9qf:{mountpoint:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~projected/kube-api-access-fz9qf major:0 minor:225 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/etcd-client major:0 minor:209 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/serving-cert major:0 minor:223 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d874a21-43aa-4d81-b904-853fb3da5a94/volumes/kubernetes.io~projected/kube-api-access-4b8jr:{mountpoint:/var/lib/kubelet/pods/7d874a21-43aa-4d81-b904-853fb3da5a94/volumes/kubernetes.io~projected/kube-api-access-4b8jr major:0 minor:234 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7d874a21-43aa-4d81-b904-853fb3da5a94/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/7d874a21-43aa-4d81-b904-853fb3da5a94/volumes/kubernetes.io~secret/metrics-tls major:0 minor:462 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e938267-de1f-46f7-bf78-b0b3e810c4fa/volumes/kubernetes.io~projected/kube-api-access-kvmpk:{mountpoint:/var/lib/kubelet/pods/7e938267-de1f-46f7-bf78-b0b3e810c4fa/volumes/kubernetes.io~projected/kube-api-access-kvmpk major:0 minor:954 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/7e938267-de1f-46f7-bf78-b0b3e810c4fa/volumes/kubernetes.io~secret/machine-approver-tls:{mountpoint:/var/lib/kubelet/pods/7e938267-de1f-46f7-bf78-b0b3e810c4fa/volumes/kubernetes.io~secret/machine-approver-tls major:0 minor:953 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80eb89dc-ccfc-4360-811a-82a3ef6f7b65/volumes/kubernetes.io~projected/kube-api-access-t7wld:{mountpoint:/var/lib/kubelet/pods/80eb89dc-ccfc-4360-811a-82a3ef6f7b65/volumes/kubernetes.io~projected/kube-api-access-t7wld major:0 minor:643 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/80eb89dc-ccfc-4360-811a-82a3ef6f7b65/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls:{mountpoint:/var/lib/kubelet/pods/80eb89dc-ccfc-4360-811a-82a3ef6f7b65/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls major:0 minor:92 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/81835d51-a414-440f-889b-690561e98d6a/volumes/kubernetes.io~projected/ca-certs:{mountpoint:/var/lib/kubelet/pods/81835d51-a414-440f-889b-690561e98d6a/volumes/kubernetes.io~projected/ca-certs major:0 minor:675 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81835d51-a414-440f-889b-690561e98d6a/volumes/kubernetes.io~projected/kube-api-access-nd8dv:{mountpoint:/var/lib/kubelet/pods/81835d51-a414-440f-889b-690561e98d6a/volumes/kubernetes.io~projected/kube-api-access-nd8dv major:0 minor:663 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/81835d51-a414-440f-889b-690561e98d6a/volumes/kubernetes.io~secret/catalogserver-certs:{mountpoint:/var/lib/kubelet/pods/81835d51-a414-440f-889b-690561e98d6a/volumes/kubernetes.io~secret/catalogserver-certs major:0 minor:676 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ad2a6d5-6edf-4840-89f9-47847c8dac05/volumes/kubernetes.io~projected/kube-api-access-rrvhw:{mountpoint:/var/lib/kubelet/pods/8ad2a6d5-6edf-4840-89f9-47847c8dac05/volumes/kubernetes.io~projected/kube-api-access-rrvhw major:0 minor:256 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8ad2a6d5-6edf-4840-89f9-47847c8dac05/volumes/kubernetes.io~secret/marketplace-operator-metrics:{mountpoint:/var/lib/kubelet/pods/8ad2a6d5-6edf-4840-89f9-47847c8dac05/volumes/kubernetes.io~secret/marketplace-operator-metrics major:0 minor:511 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~projected/kube-api-access-t6wzz:{mountpoint:/var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~projected/kube-api-access-t6wzz major:0 minor:125 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert:{mountpoint:/var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert major:0 minor:124 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~projected/kube-api-access-b4qsk:{mountpoint:/var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~projected/kube-api-access-b4qsk major:0 minor:239 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:461 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~secret/node-tuning-operator-tls:{mountpoint:/var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~secret/node-tuning-operator-tls major:0 minor:459 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/bound-sa-token:{mountpoint:/var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/bound-sa-token major:0 minor:237 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/kube-api-access-fhk76:{mountpoint:/var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/kube-api-access-fhk76 major:0 minor:224 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~secret/image-registry-operator-tls:{mountpoint:/var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~secret/image-registry-operator-tls major:0 minor:463 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/95c7493b-ad9d-490e-83f3-aa28750b2b5e/volumes/kubernetes.io~projected/kube-api-access-wds6q:{mountpoint:/var/lib/kubelet/pods/95c7493b-ad9d-490e-83f3-aa28750b2b5e/volumes/kubernetes.io~projected/kube-api-access-wds6q major:0 minor:614 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/95c7493b-ad9d-490e-83f3-aa28750b2b5e/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/95c7493b-ad9d-490e-83f3-aa28750b2b5e/volumes/kubernetes.io~secret/metrics-tls major:0 minor:621 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~projected/kube-api-access-4rg4g:{mountpoint:/var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~projected/kube-api-access-4rg4g major:0 minor:227 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~secret/serving-cert major:0 minor:214 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9863f7ff-4c8d-42a3-a822-01697cf9c920/volumes/kubernetes.io~projected/kube-api-access-44dmt:{mountpoint:/var/lib/kubelet/pods/9863f7ff-4c8d-42a3-a822-01697cf9c920/volumes/kubernetes.io~projected/kube-api-access-44dmt major:0 minor:801 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d2f93bd-e4ce-4ed2-b249-946338f753ed/volumes/kubernetes.io~projected/kube-api-access-qq6v6:{mountpoint:/var/lib/kubelet/pods/9d2f93bd-e4ce-4ed2-b249-946338f753ed/volumes/kubernetes.io~projected/kube-api-access-qq6v6 major:0 minor:808 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/volumes/kubernetes.io~projected/kube-api-access-pj7cp:{mountpoint:/var/lib/kubelet/pods/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/volumes/kubernetes.io~projected/kube-api-access-pj7cp major:0 minor:123 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/volumes/kubernetes.io~secret/metrics-certs:{mountpoint:/var/lib/kubelet/pods/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/volumes/kubernetes.io~secret/metrics-certs major:0 minor:516 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1/volumes/kubernetes.io~projected/kube-api-access major:0 minor:476 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1/volumes/kubernetes.io~secret/serving-cert major:0 minor:475 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~projected/kube-api-access-ztdc9:{mountpoint:/var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~projected/kube-api-access-ztdc9 major:0 minor:226 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~secret/serving-cert major:0 minor:220 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b74de987-7962-425e-9447-24b285eb888f/volumes/kubernetes.io~empty-dir/etc-tuned:{mountpoint:/var/lib/kubelet/pods/b74de987-7962-425e-9447-24b285eb888f/volumes/kubernetes.io~empty-dir/etc-tuned major:0 minor:603 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b74de987-7962-425e-9447-24b285eb888f/volumes/kubernetes.io~empty-dir/tmp:{mountpoint:/var/lib/kubelet/pods/b74de987-7962-425e-9447-24b285eb888f/volumes/kubernetes.io~empty-dir/tmp major:0 minor:604 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/b74de987-7962-425e-9447-24b285eb888f/volumes/kubernetes.io~projected/kube-api-access-845hm:{mountpoint:/var/lib/kubelet/pods/b74de987-7962-425e-9447-24b285eb888f/volumes/kubernetes.io~projected/kube-api-access-845hm major:0 minor:591 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/bd264af8-4ced-40c4-b4f6-202bab42d0cb/volumes/kubernetes.io~projected/kube-api-access-xcf2h:{mountpoint:/var/lib/kubelet/pods/bd264af8-4ced-40c4-b4f6-202bab42d0cb/volumes/kubernetes.io~projected/kube-api-access-xcf2h major:0 minor:610 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~projected/kube-api-access-lz8ww:{mountpoint:/var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~projected/kube-api-access-lz8ww major:0 minor:615 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~secret/encryption-config:{mountpoint:/var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~secret/encryption-config major:0 minor:613 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~secret/etcd-client:{mountpoint:/var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~secret/etcd-client major:0 minor:611 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~secret/serving-cert major:0 minor:612 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c55a215a-9a95-4f48-8668-9b76503c3044/volumes/kubernetes.io~projected/kube-api-access-g8n5d:{mountpoint:/var/lib/kubelet/pods/c55a215a-9a95-4f48-8668-9b76503c3044/volumes/kubernetes.io~projected/kube-api-access-g8n5d major:0 minor:857 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c55a215a-9a95-4f48-8668-9b76503c3044/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/c55a215a-9a95-4f48-8668-9b76503c3044/volumes/kubernetes.io~secret/proxy-tls major:0 minor:856 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~projected/kube-api-access-txxbg:{mountpoint:/var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~projected/kube-api-access-txxbg major:0 minor:139 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~secret/webhook-cert major:0 minor:138 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~projected/kube-api-access major:0 minor:244 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~secret/serving-cert major:0 minor:215 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca06fac5-6707-4521-88ce-1768fede42c2/volumes/kubernetes.io~projected/kube-api-access-2pt2w:{mountpoint:/var/lib/kubelet/pods/ca06fac5-6707-4521-88ce-1768fede42c2/volumes/kubernetes.io~projected/kube-api-access-2pt2w major:0 minor:838 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca06fac5-6707-4521-88ce-1768fede42c2/volumes/kubernetes.io~secret/apiservice-cert:{mountpoint:/var/lib/kubelet/pods/ca06fac5-6707-4521-88ce-1768fede42c2/volumes/kubernetes.io~secret/apiservice-cert major:0 minor:91 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/ca06fac5-6707-4521-88ce-1768fede42c2/volumes/kubernetes.io~secret/webhook-cert:{mountpoint:/var/lib/kubelet/pods/ca06fac5-6707-4521-88ce-1768fede42c2/volumes/kubernetes.io~secret/webhook-cert major:0 minor:585 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/d163333f-fda5-4067-ad7c-6f646ae411c8/volumes/kubernetes.io~projected/kube-api-access-v2jgj:{mountpoint:/var/lib/kubelet/pods/d163333f-fda5-4067-ad7c-6f646ae411c8/volumes/kubernetes.io~projected/kube-api-access-v2jgj major:0 minor:233 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7/volumes/kubernetes.io~projected/kube-api-access-jvrdt:{mountpoint:/var/lib/kubelet/pods/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7/volumes/kubernetes.io~projected/kube-api-access-jvrdt major:0 minor:419 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7/volumes/kubernetes.io~secret/serving-cert major:0 minor:417 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d89b5d71-5522-433e-a0bb-f2767332e744/volumes/kubernetes.io~projected/kube-api-access-lmnh2:{mountpoint:/var/lib/kubelet/pods/d89b5d71-5522-433e-a0bb-f2767332e744/volumes/kubernetes.io~projected/kube-api-access-lmnh2 major:0 minor:430 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/d89b5d71-5522-433e-a0bb-f2767332e744/volumes/kubernetes.io~secret/signing-key:{mountpoint:/var/lib/kubelet/pods/d89b5d71-5522-433e-a0bb-f2767332e744/volumes/kubernetes.io~secret/signing-key major:0 minor:429 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dbcb4b80-425a-4dd5-93a8-bb462f641ef1/volumes/kubernetes.io~projected/kube-api-access-sd26j:{mountpoint:/var/lib/kubelet/pods/dbcb4b80-425a-4dd5-93a8-bb462f641ef1/volumes/kubernetes.io~projected/kube-api-access-sd26j major:0 minor:90 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dbcb4b80-425a-4dd5-93a8-bb462f641ef1/volumes/kubernetes.io~secret/proxy-tls:{mountpoint:/var/lib/kubelet/pods/dbcb4b80-425a-4dd5-93a8-bb462f641ef1/volumes/kubernetes.io~secret/proxy-tls major:0 minor:85 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~projected/kube-api-access-5xmqc:{mountpoint:/var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~projected/kube-api-access-5xmqc major:0 minor:99 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~secret/metrics-tls:{mountpoint:/var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~secret/metrics-tls major:0 minor:43 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/de46c12a-aa3e-442e-bcc4-365d05f50103/volumes/kubernetes.io~projected/kube-api-access-sjkgv:{mountpoint:/var/lib/kubelet/pods/de46c12a-aa3e-442e-bcc4-365d05f50103/volumes/kubernetes.io~projected/kube-api-access-sjkgv major:0 minor:101 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~projected/kube-api-access major:0 minor:232 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~secret/serving-cert major:0 minor:219 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f771149b-9d62-408e-be6f-72f575b1ec42/volumes/kubernetes.io~projected/kube-api-access-qmr7z:{mountpoint:/var/lib/kubelet/pods/f771149b-9d62-408e-be6f-72f575b1ec42/volumes/kubernetes.io~projected/kube-api-access-qmr7z major:0 minor:443 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/volumes/kubernetes.io~projected/kube-api-access-2dlx5:{mountpoint:/var/lib/kubelet/pods/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/volumes/kubernetes.io~projected/kube-api-access-2dlx5 major:0 minor:118 fsType:tmpfs blockSize:0} 
/var/lib/kubelet/pods/fb5dee36-70a4-47a4-afc2-d3209a476362/volumes/kubernetes.io~projected/kube-api-access-mvckz:{mountpoint:/var/lib/kubelet/pods/fb5dee36-70a4-47a4-afc2-d3209a476362/volumes/kubernetes.io~projected/kube-api-access-mvckz major:0 minor:650 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~projected/kube-api-access-2nbvg:{mountpoint:/var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~projected/kube-api-access-2nbvg major:0 minor:228 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~secret/serving-cert major:0 minor:218 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~projected/kube-api-access:{mountpoint:/var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~projected/kube-api-access major:0 minor:243 fsType:tmpfs blockSize:0} /var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~secret/serving-cert:{mountpoint:/var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~secret/serving-cert major:0 minor:216 fsType:tmpfs blockSize:0} overlay_0-1005:{mountpoint:/var/lib/containers/storage/overlay/1ecd52a23f504971ab6b4eb4a6f425156f34efa1b303d1a17afd82252e2a2b83/merged major:0 minor:1005 fsType:overlay blockSize:0} overlay_0-1010:{mountpoint:/var/lib/containers/storage/overlay/a5fc47e96f7742f8f716b866a8ec5b92b9b72275ecf14a6065682711a12f72bb/merged major:0 minor:1010 fsType:overlay blockSize:0} overlay_0-1012:{mountpoint:/var/lib/containers/storage/overlay/cbf4bed4c6bcdbcfc7e715b0f76076899684860f646bf2bd9697495f83208f48/merged major:0 minor:1012 fsType:overlay blockSize:0} 
overlay_0-1014:{mountpoint:/var/lib/containers/storage/overlay/f1e5f007d7464965a32e251403abbbb0ad56c8cb0d1f8197e1cc1574741f3414/merged major:0 minor:1014 fsType:overlay blockSize:0} overlay_0-1023:{mountpoint:/var/lib/containers/storage/overlay/52ad1c1c1be6a39783e1bf0decd339887e0db96546db698bf443149fc92211d4/merged major:0 minor:1023 fsType:overlay blockSize:0} overlay_0-1043:{mountpoint:/var/lib/containers/storage/overlay/5a8d0dbf7f76e6eb53a27e5494289713afb8aaf199ae14a6c1f94025dd47aa62/merged major:0 minor:1043 fsType:overlay blockSize:0} overlay_0-1045:{mountpoint:/var/lib/containers/storage/overlay/19b769ac27c491440ae45543388325c4f9e3f22ccfc88da3bd0e5dc506e64a59/merged major:0 minor:1045 fsType:overlay blockSize:0} overlay_0-1047:{mountpoint:/var/lib/containers/storage/overlay/fab4e14e4f3afd89541ecc0b92b8ca418c7b6eaecee73a4d5a6f43211d772bed/merged major:0 minor:1047 fsType:overlay blockSize:0} overlay_0-1049:{mountpoint:/var/lib/containers/storage/overlay/319320d590087d38003782aa773001ab7e9e13da1f3728e60ae1d7387bc17f07/merged major:0 minor:1049 fsType:overlay blockSize:0} overlay_0-106:{mountpoint:/var/lib/containers/storage/overlay/43109ccaebefc6548cfee70b45bd19623b6c3ac3f8d6d6ecc82a09932bc4a9dd/merged major:0 minor:106 fsType:overlay blockSize:0} overlay_0-1061:{mountpoint:/var/lib/containers/storage/overlay/d311b5bbe95f5e092f7d1f8f145009d73b2e1ae79d082c07231b9870083cdcc1/merged major:0 minor:1061 fsType:overlay blockSize:0} overlay_0-1063:{mountpoint:/var/lib/containers/storage/overlay/b829118ec6e0563d7782432d84d1d15bc77a92b1a36ae614396b38a797d7ab31/merged major:0 minor:1063 fsType:overlay blockSize:0} overlay_0-1066:{mountpoint:/var/lib/containers/storage/overlay/1048ca0262d90008b438d70962e2fda687690158eeb81200fed485b784159d86/merged major:0 minor:1066 fsType:overlay blockSize:0} overlay_0-1073:{mountpoint:/var/lib/containers/storage/overlay/252aac3e069c9a90303b56609f4fd4dd1eb709ee3a9e4d71d1e03d545451c9cf/merged major:0 minor:1073 fsType:overlay blockSize:0} 
overlay_0-1078:{mountpoint:/var/lib/containers/storage/overlay/4543ae6cfb25660accd22386d76ef3d35d02705482bbd63ec50b21c6185b3a86/merged major:0 minor:1078 fsType:overlay blockSize:0} overlay_0-109:{mountpoint:/var/lib/containers/storage/overlay/06165c04bb5a67e7df3a957bfb4c715e70d9ce18f8453a9ab61dab8ac8854dc7/merged major:0 minor:109 fsType:overlay blockSize:0} overlay_0-111:{mountpoint:/var/lib/containers/storage/overlay/7edb65437f20e2f41637c9417b19c24fa58507eb4b2450330bf86a4e462baf2f/merged major:0 minor:111 fsType:overlay blockSize:0} overlay_0-116:{mountpoint:/var/lib/containers/storage/overlay/895885a58ff1a4adf2a6e4cb9e0fb01c8a921c27537a5abc59f2f60cbf819c10/merged major:0 minor:116 fsType:overlay blockSize:0} overlay_0-121:{mountpoint:/var/lib/containers/storage/overlay/2987bc90bb2b585659ecf426af4f82d579b3f0803d5d8492bd1d7d37c7bc8b87/merged major:0 minor:121 fsType:overlay blockSize:0} overlay_0-132:{mountpoint:/var/lib/containers/storage/overlay/8fafe0b081f68d169f9afd99488cd14bee8ddf0a709fe8db9c921f1e7f58c664/merged major:0 minor:132 fsType:overlay blockSize:0} overlay_0-134:{mountpoint:/var/lib/containers/storage/overlay/0abb975f7bf2daa3e2a9d1927541c1ad7d29b662a94c813ddb68169284d80cd3/merged major:0 minor:134 fsType:overlay blockSize:0} overlay_0-136:{mountpoint:/var/lib/containers/storage/overlay/6fc1cbc561fe4b92911d03ff123eac7408ff2f5bedea41f0bc5357fec565ff69/merged major:0 minor:136 fsType:overlay blockSize:0} overlay_0-140:{mountpoint:/var/lib/containers/storage/overlay/bf5bd8d4fd28886c648d43de7514c52876847fdb963accee5ed07d5cd4cb4107/merged major:0 minor:140 fsType:overlay blockSize:0} overlay_0-150:{mountpoint:/var/lib/containers/storage/overlay/6d3acb760c4b20d9748fdc3333ff0040f73b487a453566edfd46f07e36253b24/merged major:0 minor:150 fsType:overlay blockSize:0} overlay_0-152:{mountpoint:/var/lib/containers/storage/overlay/fbabe8d8a7c8f385ef3ce61da074874df1f88af26e33844ccb563a22aa890c2d/merged major:0 minor:152 fsType:overlay blockSize:0} 
overlay_0-156:{mountpoint:/var/lib/containers/storage/overlay/42d2b780b420ed44dd4846985307d8cc760a4d46d4226dfa4d0f44ea4852afd9/merged major:0 minor:156 fsType:overlay blockSize:0} overlay_0-160:{mountpoint:/var/lib/containers/storage/overlay/70d55512a5d0b672370246657814e997fcf93d175d4a524c3ccb8f6300437869/merged major:0 minor:160 fsType:overlay blockSize:0} overlay_0-171:{mountpoint:/var/lib/containers/storage/overlay/f2f1c108bd07dbd71690c3e28c5dc74ddbd1e4a2880c166a69a1bf01c89889e1/merged major:0 minor:171 fsType:overlay blockSize:0} overlay_0-179:{mountpoint:/var/lib/containers/storage/overlay/d17c96ebf95bfd571c8fb1756eea80cd6ba9b0df7245f6951672a67455f18052/merged major:0 minor:179 fsType:overlay blockSize:0} overlay_0-184:{mountpoint:/var/lib/containers/storage/overlay/7d2edde02cc7f4b6bf935eace6543a41fa7744c9ae48aba8e07cf2d9c1ca2eb0/merged major:0 minor:184 fsType:overlay blockSize:0} overlay_0-189:{mountpoint:/var/lib/containers/storage/overlay/68a3cb19fc6979295f3fed3acc6389d459ffe1291c51c7b4f4bb3f988fcd43f4/merged major:0 minor:189 fsType:overlay blockSize:0} overlay_0-194:{mountpoint:/var/lib/containers/storage/overlay/11a8ffcde96a338ac18cee2bc5c119881aedcf3788dd07fccc369ccf48b7f0ab/merged major:0 minor:194 fsType:overlay blockSize:0} overlay_0-195:{mountpoint:/var/lib/containers/storage/overlay/620129c40abdc0746af65c3a4f4fa9668fbf7f05a7f75342fac6d5cdbca04eab/merged major:0 minor:195 fsType:overlay blockSize:0} overlay_0-204:{mountpoint:/var/lib/containers/storage/overlay/b951a512676ab97a2a776eedfce14119387a9d0504d88798974aeee6c8b6ca3a/merged major:0 minor:204 fsType:overlay blockSize:0} overlay_0-267:{mountpoint:/var/lib/containers/storage/overlay/5b554684d419eaf55cdda2ade3052c248dc6fcba4bb1208c94e14666effe2056/merged major:0 minor:267 fsType:overlay blockSize:0} overlay_0-277:{mountpoint:/var/lib/containers/storage/overlay/6a989c40041f293bbfbd96cf1b9920712ffafdaf6fc6d787fd47f1e491d7f557/merged major:0 minor:277 fsType:overlay blockSize:0} 
overlay_0-279:{mountpoint:/var/lib/containers/storage/overlay/99e3df40c144e27e2b40cc2e982f4769acf8a5a3087eaca635c8742594ce9773/merged major:0 minor:279 fsType:overlay blockSize:0} overlay_0-283:{mountpoint:/var/lib/containers/storage/overlay/a23069968a762dfe9226e0114302dd2e09eea311e96f5f22bfdfb5b6b71cecc9/merged major:0 minor:283 fsType:overlay blockSize:0} overlay_0-285:{mountpoint:/var/lib/containers/storage/overlay/ae1d86132d00d8c198aa7e25f3be046e42bd41f31b5fe4fbe63980df035367d6/merged major:0 minor:285 fsType:overlay blockSize:0} overlay_0-287:{mountpoint:/var/lib/containers/storage/overlay/7f4896967f5bd54424fad276ba8ca08288d9698151834291d04f150cf7eeb094/merged major:0 minor:287 fsType:overlay blockSize:0} overlay_0-289:{mountpoint:/var/lib/containers/storage/overlay/ae13b16536c497bcde65d74b75f4dac6a76280afb57ce16f33c179edb20707f4/merged major:0 minor:289 fsType:overlay blockSize:0} overlay_0-292:{mountpoint:/var/lib/containers/storage/overlay/167fdc9aff360cf48be736948fd5c894ce1943650db993b157c5e3d1d4d0df80/merged major:0 minor:292 fsType:overlay blockSize:0} overlay_0-295:{mountpoint:/var/lib/containers/storage/overlay/58f909d34990dafb4f72f74ed02b5c16002c775fdba4af2c9d3e80998269fbb2/merged major:0 minor:295 fsType:overlay blockSize:0} overlay_0-297:{mountpoint:/var/lib/containers/storage/overlay/97245a150e35511567d0450a18eee0f212c518b3ab71bf0c9f1b7340fdcbfa5d/merged major:0 minor:297 fsType:overlay blockSize:0} overlay_0-299:{mountpoint:/var/lib/containers/storage/overlay/768bca547e722d1841ab499bb61903494d16f2d40611c0b022b6555b84e04f6a/merged major:0 minor:299 fsType:overlay blockSize:0} overlay_0-301:{mountpoint:/var/lib/containers/storage/overlay/f9593cc36923c95204786679ca2c2aec6fbdf844972ef9081766704237c891b0/merged major:0 minor:301 fsType:overlay blockSize:0} overlay_0-303:{mountpoint:/var/lib/containers/storage/overlay/037b3e2043c8e92204d0771411e5aaa2799f8f6f7e47a29c6ee19eacb8ec50c7/merged major:0 minor:303 fsType:overlay blockSize:0} 
overlay_0-305:{mountpoint:/var/lib/containers/storage/overlay/34de7f109a1f2a314403c7c0bd583ffadf1c0b520d336b46a07ad865451c59ea/merged major:0 minor:305 fsType:overlay blockSize:0} overlay_0-307:{mountpoint:/var/lib/containers/storage/overlay/4cec077417d4f85f5797287a1f3c10e20f3e32f75613e8c1c27fa38326514d71/merged major:0 minor:307 fsType:overlay blockSize:0} overlay_0-310:{mountpoint:/var/lib/containers/storage/overlay/ca391c0538f8c4ea57d459ae502da6b3a196dce757ed588ee552deda346ae532/merged major:0 minor:310 fsType:overlay blockSize:0} overlay_0-312:{mountpoint:/var/lib/containers/storage/overlay/54a19d961238404f376c2606f185dc715017893db2223a03d1f710a9196abba8/merged major:0 minor:312 fsType:overlay blockSize:0} overlay_0-314:{mountpoint:/var/lib/containers/storage/overlay/b70231495361df06ea0db3e2eb3e4075b8779308a359b8a00cdce8af04d40624/merged major:0 minor:314 fsType:overlay blockSize:0} overlay_0-316:{mountpoint:/var/lib/containers/storage/overlay/4dca8458fe8246221597baf67321105b96cf92a4a5d0d4da8a94a0da2d1d24b3/merged major:0 minor:316 fsType:overlay blockSize:0} overlay_0-318:{mountpoint:/var/lib/containers/storage/overlay/d5fd33a4479b7135cc39fa46e14972aebdad57b396c737b0f6c4585f8b5a4699/merged major:0 minor:318 fsType:overlay blockSize:0} overlay_0-320:{mountpoint:/var/lib/containers/storage/overlay/353be6744b0a1663b0af23bef33d179c04fe03758ce89a849f7378308c16020f/merged major:0 minor:320 fsType:overlay blockSize:0} overlay_0-322:{mountpoint:/var/lib/containers/storage/overlay/604e639ff1a975901699df513f9497307c18c3e171e08aa4aca5f7bccb188586/merged major:0 minor:322 fsType:overlay blockSize:0} overlay_0-324:{mountpoint:/var/lib/containers/storage/overlay/bfd4b2e6cd2eb22324ccbc2dd67fdc0dc2b7bdbc33fc9f09676faa9aeae1f9c1/merged major:0 minor:324 fsType:overlay blockSize:0} overlay_0-343:{mountpoint:/var/lib/containers/storage/overlay/e5bc96d91baeecdf44df7a81238ae8644085e6d8c1a692d6e074c66dc5cf8ef6/merged major:0 minor:343 fsType:overlay blockSize:0} 
overlay_0-362:{mountpoint:/var/lib/containers/storage/overlay/6dab5acb733f0e585063bf9d2c8309fd984634ac7694220530f70b3ff13c1141/merged major:0 minor:362 fsType:overlay blockSize:0} overlay_0-367:{mountpoint:/var/lib/containers/storage/overlay/e609aa332ac1e7ff5240f9f5819d4a6836f963dcf3e64ed0d0273b846e6e047f/merged major:0 minor:367 fsType:overlay blockSize:0} overlay_0-372:{mountpoint:/var/lib/containers/storage/overlay/747e7898552179c60a268836b69488eba76b3ee027b11c21ad94c679c784e22a/merged major:0 minor:372 fsType:overlay blockSize:0} overlay_0-391:{mountpoint:/var/lib/containers/storage/overlay/14126353f617c487885084bb0b7593b4731d469b3f50fe7bd2732b293b490052/merged major:0 minor:391 fsType:overlay blockSize:0} overlay_0-407:{mountpoint:/var/lib/containers/storage/overlay/45dbb10fdc4b70be83aeed83e69e541ad2113b1964dc52fa2f45030a51ed77f2/merged major:0 minor:407 fsType:overlay blockSize:0} overlay_0-408:{mountpoint:/var/lib/containers/storage/overlay/1c3d938d9845daa19f7e6cd3cfd2a442b4da9e4d548f75055b2166d1c28dadb8/merged major:0 minor:408 fsType:overlay blockSize:0} overlay_0-41:{mountpoint:/var/lib/containers/storage/overlay/ae1d2c2f7884318fe08e8d62b667c493bc8fb98efc09bb4b66f84664c2885ce5/merged major:0 minor:41 fsType:overlay blockSize:0} overlay_0-416:{mountpoint:/var/lib/containers/storage/overlay/ec93076c5c89349daa30f1cd6c6bde60cdd86a2c7c41d66932e56c562d800f50/merged major:0 minor:416 fsType:overlay blockSize:0} overlay_0-420:{mountpoint:/var/lib/containers/storage/overlay/0b5fc3510724c8d3307cd1c64e817575f10fa27c4bc2227c8e853050bdcb2d6b/merged major:0 minor:420 fsType:overlay blockSize:0} overlay_0-431:{mountpoint:/var/lib/containers/storage/overlay/76aca97bf6788d17da65f9bbb59b833890b5e00ab400c37e69190bd342a713a9/merged major:0 minor:431 fsType:overlay blockSize:0} overlay_0-439:{mountpoint:/var/lib/containers/storage/overlay/5dff659c6ad2595575dae1077bceba496f8b8b5d21ab83f0ea09d2cec654834b/merged major:0 minor:439 fsType:overlay blockSize:0} 
overlay_0-44:{mountpoint:/var/lib/containers/storage/overlay/f19ea7eb37fb095b460da5074cfbcb6990beb5b98ac0d0e8487b50a2921d4f0e/merged major:0 minor:44 fsType:overlay blockSize:0} overlay_0-441:{mountpoint:/var/lib/containers/storage/overlay/ccbcb9c6aa63dd520bcacc279eb0a0449031818f2858952cd736e6fab7105061/merged major:0 minor:441 fsType:overlay blockSize:0} overlay_0-451:{mountpoint:/var/lib/containers/storage/overlay/accae5d814dd4e64740089127a9f5e9184b8e37a30b8efe50db9fd28de717845/merged major:0 minor:451 fsType:overlay blockSize:0} overlay_0-456:{mountpoint:/var/lib/containers/storage/overlay/4ea56cb3dbbb809b5b707e631f01a788f40eeae71fc227913865108e5555496e/merged major:0 minor:456 fsType:overlay blockSize:0} overlay_0-464:{mountpoint:/var/lib/containers/storage/overlay/3001c7f1309fc704ad0deab8e2d6cd91a280a472fd0c748fb8540f777a297b51/merged major:0 minor:464 fsType:overlay blockSize:0} overlay_0-477:{mountpoint:/var/lib/containers/storage/overlay/4fb3af67a6e9e07fe7caf947d3ec3c6260b3dd98d50a2aec7359eaf4afadc357/merged major:0 minor:477 fsType:overlay blockSize:0} overlay_0-479:{mountpoint:/var/lib/containers/storage/overlay/9e26f50eb30b0f56939e3bc5e5743821a2e1c8ccfc3c4f16cc6d05782cbe8290/merged major:0 minor:479 fsType:overlay blockSize:0} overlay_0-48:{mountpoint:/var/lib/containers/storage/overlay/6413e9d529c77e1a38a897df6b3e18a76b3455252fd09aa0ba1fb68405206901/merged major:0 minor:48 fsType:overlay blockSize:0} overlay_0-482:{mountpoint:/var/lib/containers/storage/overlay/38097152e7a4c2794fd6d9cdb4bfb58ed796ec5a3be242cc0005059cfe73277a/merged major:0 minor:482 fsType:overlay blockSize:0} overlay_0-483:{mountpoint:/var/lib/containers/storage/overlay/2090e491af9025aa7764a79363860bc56de0938a9211452a55f7a5cc2f9f6681/merged major:0 minor:483 fsType:overlay blockSize:0} overlay_0-485:{mountpoint:/var/lib/containers/storage/overlay/d89fedf9a3d955d50f2d025a41743b49b9c747395bea3f869d4d63b207d9dfef/merged major:0 minor:485 fsType:overlay blockSize:0} 
overlay_0-486:{mountpoint:/var/lib/containers/storage/overlay/c02468ab3debb9e84263c13f7dbff97bd6f10960e18c5574f4c9dd25a0a4e565/merged major:0 minor:486 fsType:overlay blockSize:0} overlay_0-493:{mountpoint:/var/lib/containers/storage/overlay/26fe0f94bb919702fdc75611611e2f2b887e3e91e6071dcc3617fb0c104bdc41/merged major:0 minor:493 fsType:overlay blockSize:0} overlay_0-494:{mountpoint:/var/lib/containers/storage/overlay/ce5f3915d9870327ef12592ec9938635b2e75c8a2f2f70b78d6e299970759b8a/merged major:0 minor:494 fsType:overlay blockSize:0} overlay_0-50:{mountpoint:/var/lib/containers/storage/overlay/27206c70d8cf5612b4bc562687829c2e7c5d3807798f6ca658a94a9136391f11/merged major:0 minor:50 fsType:overlay blockSize:0} overlay_0-501:{mountpoint:/var/lib/containers/storage/overlay/c1c90f59d73079f314a4547198539ad4b640d9cc583259e2546798d35b7ff7d8/merged major:0 minor:501 fsType:overlay blockSize:0} overlay_0-518:{mountpoint:/var/lib/containers/storage/overlay/4449e26c71f1a31eb89d52e7affbb761ee0e3f32f02dc0827999063df470545c/merged major:0 minor:518 fsType:overlay blockSize:0} overlay_0-52:{mountpoint:/var/lib/containers/storage/overlay/993dfceedf878bd044ff3764f028cbd697657d6177890e5c99a09115387c9b7f/merged major:0 minor:52 fsType:overlay blockSize:0} overlay_0-537:{mountpoint:/var/lib/containers/storage/overlay/3e8c2e6044ea58750f15babfd77aa2745ca8f680630e2572617a87a8e24eeffa/merged major:0 minor:537 fsType:overlay blockSize:0} overlay_0-538:{mountpoint:/var/lib/containers/storage/overlay/49c8be7c5787477801e76024cf714d577128c851386f2eb2723174591d1d66a8/merged major:0 minor:538 fsType:overlay blockSize:0} overlay_0-541:{mountpoint:/var/lib/containers/storage/overlay/e6bfce8c7f6ebb5c51d35cdd1f38223b6f17b5e8942de6a609dc6e0cb4c53dfb/merged major:0 minor:541 fsType:overlay blockSize:0} overlay_0-543:{mountpoint:/var/lib/containers/storage/overlay/852d262c13b3871465c75d7b1a88826aeb1163d92a939175f1c5b8f818a97e99/merged major:0 minor:543 fsType:overlay blockSize:0} 
overlay_0-545:{mountpoint:/var/lib/containers/storage/overlay/4eb296715127a2ea8f4063ee04c0daf1899b9bdc9320bd0f4bbaf821ea260d6e/merged major:0 minor:545 fsType:overlay blockSize:0} overlay_0-547:{mountpoint:/var/lib/containers/storage/overlay/fa559764ddadc8d8800a514c9bdfc5d0b81ec40833cd8e6fd69cd1e0c2fbdf13/merged major:0 minor:547 fsType:overlay blockSize:0} overlay_0-552:{mountpoint:/var/lib/containers/storage/overlay/da5d4c2a2f92cec2d24cd54cf1a1d48934c5a0c1d53fa318a2aa6c2f1cc8da9c/merged major:0 minor:552 fsType:overlay blockSize:0} overlay_0-554:{mountpoint:/var/lib/containers/storage/overlay/acedd6e73757cb39e6603a6a8a16e1450ae3ac947a99da5952a3ed3a61f14ff9/merged major:0 minor:554 fsType:overlay blockSize:0} overlay_0-556:{mountpoint:/var/lib/containers/storage/overlay/24ba459a1d7e400fa1d36b463f526c45730763a62b189cd8e253747066aba770/merged major:0 minor:556 fsType:overlay blockSize:0} overlay_0-56:{mountpoint:/var/lib/containers/storage/overlay/f7bf12743c6fea821b6eb8b168fdfd607cadf9a65f6d60d8ce11be2d7859eb7e/merged major:0 minor:56 fsType:overlay blockSize:0} overlay_0-577:{mountpoint:/var/lib/containers/storage/overlay/c5e70e14f02e5d15da78c51c2f352b787de841ddd2cf1dae0fb058e8730a4ecc/merged major:0 minor:577 fsType:overlay blockSize:0} overlay_0-579:{mountpoint:/var/lib/containers/storage/overlay/f16a56c5a4abea35c1647aef57899a54c220b03177bb4a802db082cd9e80655e/merged major:0 minor:579 fsType:overlay blockSize:0} overlay_0-581:{mountpoint:/var/lib/containers/storage/overlay/2bf89d8fd21cb1e2058926d1719509333a9b39ddfed0ac690b37a84c8ca7e656/merged major:0 minor:581 fsType:overlay blockSize:0} overlay_0-582:{mountpoint:/var/lib/containers/storage/overlay/a42174e1519362e79a87367118223726bd4df2019947ce77459031ff56c0d085/merged major:0 minor:582 fsType:overlay blockSize:0} overlay_0-586:{mountpoint:/var/lib/containers/storage/overlay/0b1fd3604f609961592287dde4caece9d1c18d1c932bc2b95a4b3d4b7b49ce16/merged major:0 minor:586 fsType:overlay blockSize:0} 
overlay_0-588:{mountpoint:/var/lib/containers/storage/overlay/9365702f6cd04ba2d19d46f24db448f00ba20f04224852e5f515f5969e12c9f1/merged major:0 minor:588 fsType:overlay blockSize:0} overlay_0-59:{mountpoint:/var/lib/containers/storage/overlay/dbb12722416d618a7b8d16ea3d6321ed473f6694b9f97ff7e9be7f65220ebd76/merged major:0 minor:59 fsType:overlay blockSize:0} overlay_0-605:{mountpoint:/var/lib/containers/storage/overlay/352d24a51a1e19e8e1beab77fbecdf3274f522170e6d9a92e9bebd2f01a62922/merged major:0 minor:605 fsType:overlay blockSize:0} overlay_0-607:{mountpoint:/var/lib/containers/storage/overlay/87e9a47155afedfb12844ae13474d1c42548a8cca17b8655d420db13f101a2f4/merged major:0 minor:607 fsType:overlay blockSize:0} overlay_0-62:{mountpoint:/var/lib/containers/storage/overlay/015d0bb78a0ed37131be5ba0e242a6d988033eeca0f5c3cfaa698b02172ade74/merged major:0 minor:62 fsType:overlay blockSize:0} overlay_0-622:{mountpoint:/var/lib/containers/storage/overlay/f0416faf350d1764bdafe437fd6779660a5f8aafbe339438646d87c887c05be9/merged major:0 minor:622 fsType:overlay blockSize:0} overlay_0-626:{mountpoint:/var/lib/containers/storage/overlay/3b003ca976af645c95a786c7b4aa6771de2dc2a33320ca7ed21f2996459b64ca/merged major:0 minor:626 fsType:overlay blockSize:0} overlay_0-629:{mountpoint:/var/lib/containers/storage/overlay/4915a54ae1d0e5e9c37b77016c759eab0af99f71fe2aa5c42819d15b2e1a72eb/merged major:0 minor:629 fsType:overlay blockSize:0} overlay_0-632:{mountpoint:/var/lib/containers/storage/overlay/830ab4fa810c2e4e7c5db5290d1d07f0e9551aa75532c1105e2ccd911a1a2b43/merged major:0 minor:632 fsType:overlay blockSize:0} overlay_0-64:{mountpoint:/var/lib/containers/storage/overlay/23c0e9e7db4aa6765d437d286013d21d2ddba4ff322855e69cb599fc743f0949/merged major:0 minor:64 fsType:overlay blockSize:0} overlay_0-651:{mountpoint:/var/lib/containers/storage/overlay/90e2f06cdf6cbd2f24dee7cea1474e5f6a2d7e6942941ac532a5756c1e0dd625/merged major:0 minor:651 fsType:overlay blockSize:0} 
overlay_0-655:{mountpoint:/var/lib/containers/storage/overlay/e5eb67e86ac0f2671f8dd361ec0ff28188063d96c31fa8a3d9ffc61d5dee4e35/merged major:0 minor:655 fsType:overlay blockSize:0} overlay_0-657:{mountpoint:/var/lib/containers/storage/overlay/d710f3a1015125cf64033c3a62ef8346e032469c46ada563c440e7cb26638a15/merged major:0 minor:657 fsType:overlay blockSize:0} overlay_0-659:{mountpoint:/var/lib/containers/storage/overlay/515a57d3113b9f0270f5af39ce3a860034639f2ac33ab362b586cee14ca007f2/merged major:0 minor:659 fsType:overlay blockSize:0} overlay_0-66:{mountpoint:/var/lib/containers/storage/overlay/15343f001b363054726b059622d2e30d91ca7850b2af842e9abd7770022cf25e/merged major:0 minor:66 fsType:overlay blockSize:0} overlay_0-665:{mountpoint:/var/lib/containers/storage/overlay/aed6cf6556917dfb8a80df66d56227dd97acf3c093b120528ad3a53d4ec3de4b/merged major:0 minor:665 fsType:overlay blockSize:0} overlay_0-666:{mountpoint:/var/lib/containers/storage/overlay/e0acd3b1fdd1181ca99463c147997431ed4ad13bf5b2c5d80b7032b9a52403c9/merged major:0 minor:666 fsType:overlay blockSize:0} overlay_0-67:{mountpoint:/var/lib/containers/storage/overlay/3012f9dea5600817270cfe46731df374f7c2e19543f8332bb43ea64df310bfc7/merged major:0 minor:67 fsType:overlay blockSize:0} overlay_0-672:{mountpoint:/var/lib/containers/storage/overlay/0dc9d6fa7eb56e32d403c6c44088f3ac41ddece3c3b29a6e92f51c4c6523ab83/merged major:0 minor:672 fsType:overlay blockSize:0} overlay_0-673:{mountpoint:/var/lib/containers/storage/overlay/834042d363c25d8684b765b609b55c69ea992293ff6be41fefe74defa8faaa09/merged major:0 minor:673 fsType:overlay blockSize:0} overlay_0-677:{mountpoint:/var/lib/containers/storage/overlay/17547968488bf33f19c2d81d16790c78a8d9f14bca86290537ea8c638e48a405/merged major:0 minor:677 fsType:overlay blockSize:0} overlay_0-679:{mountpoint:/var/lib/containers/storage/overlay/49a4117b05348de91db7503c98517071854306b10d51fb4646984dfd8582d235/merged major:0 minor:679 fsType:overlay blockSize:0} 
overlay_0-69:{mountpoint:/var/lib/containers/storage/overlay/89f801506da4764067f38c4ff1a3ec7ca9c0954e7377cbbdb62dcf0255d59139/merged major:0 minor:69 fsType:overlay blockSize:0} overlay_0-695:{mountpoint:/var/lib/containers/storage/overlay/d1b171a644c813820391042057b3e5026326a1cbc46becd56f6e9a1a39a186cb/merged major:0 minor:695 fsType:overlay blockSize:0} overlay_0-700:{mountpoint:/var/lib/containers/storage/overlay/b390210f3eb4e71f564e5d9f232af7289773fb630a1224f4360f342b27852964/merged major:0 minor:700 fsType:overlay blockSize:0} overlay_0-710:{mountpoint:/var/lib/containers/storage/overlay/cd0f990750fb57df675cb8e16efd34ce8f7f0b811c8b060e9683d2c9bc7a2558/merged major:0 minor:710 fsType:overlay blockSize:0} overlay_0-713:{mountpoint:/var/lib/containers/storage/overlay/867f21938908d33655f6cff24a911a75702044b91dc1c4425d3b54b0d42da296/merged major:0 minor:713 fsType:overlay blockSize:0} overlay_0-714:{mountpoint:/var/lib/containers/storage/overlay/39749012b267f798c516f4af513ea42ac96151cb259838050f01180ff13644bb/merged major:0 minor:714 fsType:overlay blockSize:0} overlay_0-719:{mountpoint:/var/lib/containers/storage/overlay/a18dc2defe26a9b67c478c5de30d3bb62da0400c985ab7986a49bf213b6302e4/merged major:0 minor:719 fsType:overlay blockSize:0} overlay_0-721:{mountpoint:/var/lib/containers/storage/overlay/72e6e8b637e9cd8cc633552aca3db79c1a38ae562f85fb8f0c5774459e0b4327/merged major:0 minor:721 fsType:overlay blockSize:0} overlay_0-722:{mountpoint:/var/lib/containers/storage/overlay/487330c9f65eecf207073afaa5f3ee2453e0f54e67353fb35c3f6e81f6374e30/merged major:0 minor:722 fsType:overlay blockSize:0} overlay_0-724:{mountpoint:/var/lib/containers/storage/overlay/b03535425dc701a97d029ec368d40a353a716e3797cac195343eef2bd1f40891/merged major:0 minor:724 fsType:overlay blockSize:0} overlay_0-730:{mountpoint:/var/lib/containers/storage/overlay/3f8307f20fc09cdb1b6f555522015c374b74e862f0c6c095959c969ff65c384e/merged major:0 minor:730 fsType:overlay blockSize:0} 
overlay_0-74:{mountpoint:/var/lib/containers/storage/overlay/1f22b362555ed8c88d9c895b52539c1601b0ca0cc5f1da51a7086f7d42d74487/merged major:0 minor:74 fsType:overlay blockSize:0} overlay_0-744:{mountpoint:/var/lib/containers/storage/overlay/b5b735b9e4f702c2b90fe3959b07d2b9760656494079012a520dba1625c78e03/merged major:0 minor:744 fsType:overlay blockSize:0} overlay_0-750:{mountpoint:/var/lib/containers/storage/overlay/997f095f1e897fe317c91035b81c5a5022c92dd02a2965f9b4a588a743feeaa4/merged major:0 minor:750 fsType:overlay blockSize:0} overlay_0-752:{mountpoint:/var/lib/containers/storage/overlay/42dfd71787aed7b75a04c8a6c78a3fe1e7a586dce731bf239979bbc2a65f1666/merged major:0 minor:752 fsType:overlay blockSize:0} overlay_0-754:{mountpoint:/var/lib/containers/storage/overlay/ed5cf7f918c4e5dd93bad9ec82ae6b0ac20da5d8b4b4106a09374e038dc65159/merged major:0 minor:754 fsType:overlay blockSize:0} overlay_0-759:{mountpoint:/var/lib/containers/storage/overlay/b803a6eb5721703b0bb35843e284f43357e0de451f6b36bd9db69b62f901473e/merged major:0 minor:759 fsType:overlay blockSize:0} overlay_0-786:{mountpoint:/var/lib/containers/storage/overlay/75274d61cf4935bc43b36b676374246f8529d14519cd92d0a650b731f7bcc8ba/merged major:0 minor:786 fsType:overlay blockSize:0} overlay_0-788:{mountpoint:/var/lib/containers/storage/overlay/adcf33511f190991da1311f92a22647a11719de3f88011174a9f5417b461e528/merged major:0 minor:788 fsType:overlay blockSize:0} overlay_0-789:{mountpoint:/var/lib/containers/storage/overlay/b9a20b2a16b65df5c41141e178ad55b9be838765078d42b6ff97a36f70d3bda1/merged major:0 minor:789 fsType:overlay blockSize:0} overlay_0-79:{mountpoint:/var/lib/containers/storage/overlay/0603ea239306a27657318092d69478adf10fe862bd0e53710618f8fc59941672/merged major:0 minor:79 fsType:overlay blockSize:0} overlay_0-798:{mountpoint:/var/lib/containers/storage/overlay/df24a96a36343789046ca7affe10c1bc390c9005528b84eafed0702c75bf6eb3/merged major:0 minor:798 fsType:overlay blockSize:0} 
overlay_0-800:{mountpoint:/var/lib/containers/storage/overlay/405cb433aa7200874bc9e56f381d99104364721e4fa68168f17c96d587cd6698/merged major:0 minor:800 fsType:overlay blockSize:0} overlay_0-82:{mountpoint:/var/lib/containers/storage/overlay/efd76861b1e4222962b6a6e13c6b79e7e6ca952a003e339bdce8e8653562e3e6/merged major:0 minor:82 fsType:overlay blockSize:0} overlay_0-822:{mountpoint:/var/lib/containers/storage/overlay/35ef40a6653449180df3525120572c23dddf9e1fb44a79925550af8103674c2f/merged major:0 minor:822 fsType:overlay blockSize:0} overlay_0-836:{mountpoint:/var/lib/containers/storage/overlay/86b7a0054ffc6b20cdba935b0ace5666d6928847667ac15a6e4a78dbeb90152d/merged major:0 minor:836 fsType:overlay blockSize:0} overlay_0-839:{mountpoint:/var/lib/containers/storage/overlay/872a97f30aeeaee2dded705249f9440c4c8e3d93e27fe815ceb729012022825b/merged major:0 minor:839 fsType:overlay blockSize:0} overlay_0-84:{mountpoint:/var/lib/containers/storage/overlay/1aa1f648c6e98e6e883e1e91cabf0303eb121bb2b8e139f155f8f9646b6f5b65/merged major:0 minor:84 fsType:overlay blockSize:0} overlay_0-850:{mountpoint:/var/lib/containers/storage/overlay/a2282948cf49ea68ffb692883cb9c0d533beb5b17344e258673a01e031955102/merged major:0 minor:850 fsType:overlay blockSize:0} overlay_0-853:{mountpoint:/var/lib/containers/storage/overlay/835452bf2b6e84695ebcec4185eb6fc1ceb1fc952aca3a86bb175f52bc10d8fd/merged major:0 minor:853 fsType:overlay blockSize:0} overlay_0-854:{mountpoint:/var/lib/containers/storage/overlay/0811d4362fd1e050df1448568db9778755675371f1323243c39ade020455f27c/merged major:0 minor:854 fsType:overlay blockSize:0} overlay_0-858:{mountpoint:/var/lib/containers/storage/overlay/24d2d85bdb49112de335b66eb31392e1a4edbf51f06671a746545b5cbf0f1752/merged major:0 minor:858 fsType:overlay blockSize:0} overlay_0-861:{mountpoint:/var/lib/containers/storage/overlay/2686c5a60c4544ac2841e44cb8c7600a7a5c747652247b38f002fe8409e08d22/merged major:0 minor:861 fsType:overlay blockSize:0} 
overlay_0-863:{mountpoint:/var/lib/containers/storage/overlay/b3c82e2fcd1329a3ff309fa450cfc6d695c573754e675d38dc3997b82bbad53e/merged major:0 minor:863 fsType:overlay blockSize:0} overlay_0-865:{mountpoint:/var/lib/containers/storage/overlay/662dc89a06d0d4b5f282e6000d8c2d1117139e611f4f7f45bfe5f1f9f8fd0e09/merged major:0 minor:865 fsType:overlay blockSize:0} overlay_0-867:{mountpoint:/var/lib/containers/storage/overlay/ef12e6e4a792661a0a1618aca3edfb84655fb702a0c01b0c7e7eb9b25a68df01/merged major:0 minor:867 fsType:overlay blockSize:0} overlay_0-877:{mountpoint:/var/lib/containers/storage/overlay/5127893d4180d55dbef8d9a6842d8cef5022f79d4a4b4d934d294ce7b37ffada/merged major:0 minor:877 fsType:overlay blockSize:0} overlay_0-88:{mountpoint:/var/lib/containers/storage/overlay/b42e4455860c47883bb170dbb10c8a69be0223451094ff57b2ddac12ff68b41a/merged major:0 minor:88 fsType:overlay blockSize:0} overlay_0-886:{mountpoint:/var/lib/containers/storage/overlay/6ece7828d050e9f69bcb753d13d901fb655a045fa5932a5e0abee2926b860769/merged major:0 minor:886 fsType:overlay blockSize:0} overlay_0-899:{mountpoint:/var/lib/containers/storage/overlay/77fa12cf3d78904202d1bbd734e55ad0b677c119c4af518cbcadd910da867873/merged major:0 minor:899 fsType:overlay blockSize:0} overlay_0-908:{mountpoint:/var/lib/containers/storage/overlay/5b8502ebdec4424fdb5d132894f97cc07b4abf4825e736a0af1e3015998b104b/merged major:0 minor:908 fsType:overlay blockSize:0} overlay_0-911:{mountpoint:/var/lib/containers/storage/overlay/fdcf8c990885ffdbed9f3c672ebbc50295cae6d35e569d92526ba4ae0093eb8c/merged major:0 minor:911 fsType:overlay blockSize:0} overlay_0-917:{mountpoint:/var/lib/containers/storage/overlay/4b0105a947ae64a174b6926ddf85c9b93214e1efe822c24cefa3efdc1faaa66a/merged major:0 minor:917 fsType:overlay blockSize:0} overlay_0-93:{mountpoint:/var/lib/containers/storage/overlay/4ecbd6c98ad1bf66c9822e395303aef81a5346039b912d020e4248269ba90af0/merged major:0 minor:93 fsType:overlay blockSize:0} 
overlay_0-933:{mountpoint:/var/lib/containers/storage/overlay/db7435d2fedec507bc74a5ca3bf3818018c31c3728e7161b64d382445df86ce6/merged major:0 minor:933 fsType:overlay blockSize:0} overlay_0-935:{mountpoint:/var/lib/containers/storage/overlay/31c6887b86a531fb26bb61ace34abe2155f48f7f9a984232f149d3c23dfdc6bd/merged major:0 minor:935 fsType:overlay blockSize:0} overlay_0-937:{mountpoint:/var/lib/containers/storage/overlay/355ef42e65ec7892751dee852b72ea43368167158070464cb023ec7cf5ba70f7/merged major:0 minor:937 fsType:overlay blockSize:0} overlay_0-938:{mountpoint:/var/lib/containers/storage/overlay/f86434cd5a4e1f760bb5a8e0e885c5b7ceaf3390c8b6ea1d88bdd04c04ad2fb0/merged major:0 minor:938 fsType:overlay blockSize:0} overlay_0-94:{mountpoint:/var/lib/containers/storage/overlay/6d70a8d09419542b77a3e82395cb2b41c588099232e45d8847afd012f6f18f4e/merged major:0 minor:94 fsType:overlay blockSize:0} overlay_0-945:{mountpoint:/var/lib/containers/storage/overlay/e50f7a8e9f499f8c410e2d2b78945e308fdd822978b724647f0608f998a83daa/merged major:0 minor:945 fsType:overlay blockSize:0} overlay_0-947:{mountpoint:/var/lib/containers/storage/overlay/5146ad7cf2d9a2e23e4b14f663cc9b0ebefa4c85119f6e2a5b34fb1cf6d604f1/merged major:0 minor:947 fsType:overlay blockSize:0} overlay_0-949:{mountpoint:/var/lib/containers/storage/overlay/85531c676116e8ba7b48a352e11ae3364c03ea1e116e127e524b77471d94e4a3/merged major:0 minor:949 fsType:overlay blockSize:0} overlay_0-951:{mountpoint:/var/lib/containers/storage/overlay/21b086d9f9d7d54902bd7e090acd8f5776250980e0062e7b8039d1bb73678995/merged major:0 minor:951 fsType:overlay blockSize:0} overlay_0-969:{mountpoint:/var/lib/containers/storage/overlay/39b6e33a985281d4c11025cb6288b6b75453b3b6a64e60e2c19fdac85f8f6ec1/merged major:0 minor:969 fsType:overlay blockSize:0} overlay_0-971:{mountpoint:/var/lib/containers/storage/overlay/9c78a895e6d516ae5b6e41a019c32986f1a676c76059b79c8228da8f604e126f/merged major:0 minor:971 fsType:overlay blockSize:0} 
overlay_0-979:{mountpoint:/var/lib/containers/storage/overlay/3bde6aa972f1acbd8b914b54bf077bb5a72dea6433080135b14554c2ceeee194/merged major:0 minor:979 fsType:overlay blockSize:0} overlay_0-983:{mountpoint:/var/lib/containers/storage/overlay/b62bd6d87f947af949cc85e9650d605b6b6949ee0d8bb6b03a9afce75a4b502d/merged major:0 minor:983 fsType:overlay blockSize:0} overlay_0-985:{mountpoint:/var/lib/containers/storage/overlay/7dfa76294934455b3c3ce809862523e9b654626fffb36430d43ccb323cc2102c/merged major:0 minor:985 fsType:overlay blockSize:0} overlay_0-988:{mountpoint:/var/lib/containers/storage/overlay/dcbafdd9619100220a1ff7c2fc9a947fb34c29a7f0e0ae2445fd857c9d1bd4b8/merged major:0 minor:988 fsType:overlay blockSize:0} overlay_0-994:{mountpoint:/var/lib/containers/storage/overlay/18692130547ef54464a8badcd7af31eddd9f3ce5161cd4e592cfa8b805722e8e/merged major:0 minor:994 fsType:overlay blockSize:0} overlay_0-996:{mountpoint:/var/lib/containers/storage/overlay/6abd34bfc203a2d9e3c4bc50ec2939ac1fc23eb9d9582144795f616f2a80e7dc/merged major:0 minor:996 fsType:overlay blockSize:0}] Mar 13 01:17:32.202766 master-0 kubenswrapper[19803]: I0313 01:17:32.201226 19803 manager.go:217] Machine: {Timestamp:2026-03-13 01:17:32.200388244 +0000 UTC m=+0.165535933 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654132736 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:3a0a52883c534d178c5b12dafb817e60 SystemUUID:3a0a5288-3c53-4d17-8c5b-12dafb817e60 BootID:b5890e11-c274-4f10-a685-d6fee1e9f87f Filesystems:[{Device:overlay_0-483 DeviceMajor:0 DeviceMinor:483 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c55a215a-9a95-4f48-8668-9b76503c3044/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:856 Capacity:32475533312 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-407 DeviceMajor:0 DeviceMinor:407 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/35dc923311215b12bc6926327888353ee4dac03edf2bd01fd1709920b747d038/userdata/shm DeviceMajor:0 DeviceMinor:119 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~secret/ovn-control-plane-metrics-cert DeviceMajor:0 DeviceMinor:124 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:215 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-312 DeviceMajor:0 DeviceMinor:312 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-59 DeviceMajor:0 DeviceMinor:59 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~projected/kube-api-access-4rg4g DeviceMajor:0 DeviceMinor:227 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0/volumes/kubernetes.io~projected/kube-api-access-98t7n DeviceMajor:0 DeviceMinor:641 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-441 DeviceMajor:0 DeviceMinor:441 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-552 DeviceMajor:0 DeviceMinor:552 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-695 DeviceMajor:0 DeviceMinor:695 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-367 DeviceMajor:0 DeviceMinor:367 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-1023 DeviceMajor:0 DeviceMinor:1023 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-160 DeviceMajor:0 DeviceMinor:160 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/kube-api-access-fhk76 DeviceMajor:0 DeviceMinor:224 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/kube-api-access-zpdjh DeviceMajor:0 DeviceMinor:240 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-588 DeviceMajor:0 DeviceMinor:588 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:611 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-714 DeviceMajor:0 DeviceMinor:714 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-408 DeviceMajor:0 DeviceMinor:408 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0ff72b58-aca9-46f1-86ca-da8339734ac9/volumes/kubernetes.io~secret/tls-certificates DeviceMajor:0 DeviceMinor:1035 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~projected/kube-api-access-smhrl DeviceMajor:0 DeviceMinor:229 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-287 DeviceMajor:0 DeviceMinor:287 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/d89b5d71-5522-433e-a0bb-f2767332e744/volumes/kubernetes.io~secret/signing-key DeviceMajor:0 DeviceMinor:429 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7/volumes/kubernetes.io~projected/kube-api-access-jvrdt DeviceMajor:0 DeviceMinor:419 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-917 DeviceMajor:0 DeviceMinor:917 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-979 DeviceMajor:0 DeviceMinor:979 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7045bd9f4a827f56cb7bd9e063ae71240fc184218e9ad8e94a5fef4b4d176a48/userdata/shm DeviceMajor:0 DeviceMinor:1036 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/96b67a99-eada-44d7-93eb-cc3ced777fc6/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:214 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-267 DeviceMajor:0 DeviceMinor:267 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/21110b48-25fc-434a-b156-7f6bd6064bed/volumes/kubernetes.io~projected/kube-api-access-9npsh DeviceMajor:0 DeviceMinor:811 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-64 DeviceMajor:0 DeviceMinor:64 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-677 DeviceMajor:0 DeviceMinor:677 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f26f2fe408a83b7887b45acd945c90cef651bf2e6e61b90316af3ed0a1cd741e/userdata/shm DeviceMajor:0 DeviceMinor:241 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/de825527d944f688f2acf2625cf8789a7117e73fdf8ca84b446d4e5ce667dc74/userdata/shm DeviceMajor:0 DeviceMinor:245 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ee1fa592b43fd04f438a18672ba5cbe2212eefd748a0d3d95e70d1fbb463e36/userdata/shm DeviceMajor:0 
DeviceMinor:275 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9eabc21ddc531984c62d09d80b4ff970db77726a77a7e29d7793ee390a8437b9/userdata/shm DeviceMajor:0 DeviceMinor:81 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/dbcb4b80-425a-4dd5-93a8-bb462f641ef1/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:85 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-93 DeviceMajor:0 DeviceMinor:93 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846/userdata/shm DeviceMajor:0 DeviceMinor:54 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-132 DeviceMajor:0 DeviceMinor:132 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~secret/node-tuning-operator-tls DeviceMajor:0 DeviceMinor:459 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b74de987-7962-425e-9447-24b285eb888f/volumes/kubernetes.io~empty-dir/etc-tuned DeviceMajor:0 DeviceMinor:603 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/81835d51-a414-440f-889b-690561e98d6a/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:675 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/65dd1dc7-1b90-40f6-82c9-dee90a1fa852/volumes/kubernetes.io~secret/cloud-credential-operator-serving-cert DeviceMajor:0 DeviceMinor:601 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-62 DeviceMajor:0 DeviceMinor:62 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-111 DeviceMajor:0 DeviceMinor:111 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:220 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d285e2cd3ad810bbe2e32e2bf486a60f25f240f9aaa8797930d7581cb9051bc3/userdata/shm DeviceMajor:0 DeviceMinor:520 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5cdd48b8a2071aa3abf6b5c8005e72c1dbb38aa6a21e58f6cbdd8c251468cb41/userdata/shm DeviceMajor:0 DeviceMinor:524 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ca06fac5-6707-4521-88ce-1768fede42c2/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:91 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/2581e5b5-8cbb-4fa5-9888-98fb572a6232/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:824 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-858 DeviceMajor:0 DeviceMinor:858 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/volumes/kubernetes.io~projected/kube-api-access-pj7cp DeviceMajor:0 DeviceMinor:123 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:222 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/81835d51-a414-440f-889b-690561e98d6a/volumes/kubernetes.io~projected/kube-api-access-nd8dv DeviceMajor:0 DeviceMinor:663 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-700 DeviceMajor:0 DeviceMinor:700 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/80eb89dc-ccfc-4360-811a-82a3ef6f7b65/volumes/kubernetes.io~secret/cloud-controller-manager-operator-tls DeviceMajor:0 DeviceMinor:92 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-74 DeviceMajor:0 DeviceMinor:74 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a/volumes/kubernetes.io~secret/cluster-olm-operator-serving-cert DeviceMajor:0 DeviceMinor:217 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cdd0c71504e94f6dcb39dab229fb181eeb5ab28f2092fb5e419d885709d3d1ae/userdata/shm DeviceMajor:0 DeviceMinor:448 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/c598fb9b925a609d9065bd53d80c03d631ad5c318188796c910960611dc611f4/userdata/shm DeviceMajor:0 DeviceMinor:523 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-545 DeviceMajor:0 DeviceMinor:545 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-316 DeviceMajor:0 DeviceMinor:316 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7d990bd61a1e1a51f37259ece6d1d14af9e817f84717aafe9cfddf2f1cc1af71/userdata/shm DeviceMajor:0 DeviceMinor:793 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-786 DeviceMajor:0 DeviceMinor:786 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-788 DeviceMajor:0 DeviceMinor:788 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-171 DeviceMajor:0 DeviceMinor:171 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-184 DeviceMajor:0 DeviceMinor:184 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/3678d76d6368f04d7424fd0ae731dc627699ae26c8d8180a738d9913435c9819/userdata/shm DeviceMajor:0 DeviceMinor:263 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-947 DeviceMajor:0 DeviceMinor:947 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-850 DeviceMajor:0 DeviceMinor:850 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/4ca67e8bef4478f002e4442f5b186c7d786535b25d6573f50f3d477a22f7f668/userdata/shm DeviceMajor:0 DeviceMinor:791 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-983 DeviceMajor:0 DeviceMinor:983 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-299 DeviceMajor:0 DeviceMinor:299 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6e799871-735a-44e8-8193-24c5bb388928/volumes/kubernetes.io~projected/kube-api-access-jthxn DeviceMajor:0 DeviceMinor:829 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-318 DeviceMajor:0 DeviceMinor:318 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-789 DeviceMajor:0 DeviceMinor:789 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-362 DeviceMajor:0 DeviceMinor:362 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-482 DeviceMajor:0 DeviceMinor:482 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-854 DeviceMajor:0 DeviceMinor:854 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:138 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/69da0e58-2ae6-4d4b-b125-77e93df3d660/volumes/kubernetes.io~projected/kube-api-access-pzxv5 DeviceMajor:0 DeviceMinor:248 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-657 DeviceMajor:0 DeviceMinor:657 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/19d9989080bb99254df4633b984ed6ac361fb3f67806322eddb375cdee316de2/userdata/shm DeviceMajor:0 DeviceMinor:251 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~projected/kube-api-access-lz8ww DeviceMajor:0 DeviceMinor:615 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/581ff17d-f121-4ece-8e45-81f1f710d163/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:411 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/21110b48-25fc-434a-b156-7f6bd6064bed/volumes/kubernetes.io~secret/cluster-baremetal-operator-tls DeviceMajor:0 DeviceMinor:810 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-839 DeviceMajor:0 DeviceMinor:839 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-938 DeviceMajor:0 DeviceMinor:938 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1047 DeviceMajor:0 DeviceMinor:1047 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-150 DeviceMajor:0 DeviceMinor:150 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:237 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8ad2a6d5-6edf-4840-89f9-47847c8dac05/volumes/kubernetes.io~projected/kube-api-access-rrvhw DeviceMajor:0 DeviceMinor:256 
Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-730 DeviceMajor:0 DeviceMinor:730 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/21110b48-25fc-434a-b156-7f6bd6064bed/volumes/kubernetes.io~secret/cert DeviceMajor:0 DeviceMinor:642 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-41 DeviceMajor:0 DeviceMinor:41 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-945 DeviceMajor:0 DeviceMinor:945 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/28719caedf8b1f4ed31a1dd696057fe3b52449ba6c0d76bcf9bc027a93b14830/userdata/shm DeviceMajor:0 DeviceMinor:891 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volume-subpaths/run-systemd/ovnkube-controller/6 DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:overlay_0-479 DeviceMajor:0 DeviceMinor:479 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:647 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:1034 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-853 DeviceMajor:0 DeviceMinor:853 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-971 DeviceMajor:0 DeviceMinor:971 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f1893a5398893367fa6dfc57f35d1608dbd0ecd13591ae45338583f2663f6d59/userdata/shm DeviceMajor:0 DeviceMinor:46 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-556 DeviceMajor:0 DeviceMinor:556 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-605 DeviceMajor:0 DeviceMinor:605 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/bd264af8-4ced-40c4-b4f6-202bab42d0cb/volumes/kubernetes.io~projected/kube-api-access-xcf2h DeviceMajor:0 DeviceMinor:610 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-44 DeviceMajor:0 DeviceMinor:44 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/aed424610f368f2ab3bbdf35a68a20b721e3a40783a95dd4a322c10d00ffa3aa/userdata/shm DeviceMajor:0 DeviceMinor:423 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/21da7cd9c215e50e56d0756a974eda56d485e36242a9ade62bb96f7d9a66d36e/userdata/shm DeviceMajor:0 DeviceMinor:834 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9d5e008bf9f6b695cb5f727240a0c351d82558f527dcc2602815400da2d730f6/userdata/shm DeviceMajor:0 DeviceMinor:534 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-665 DeviceMajor:0 DeviceMinor:665 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~projected/kube-api-access-fz9qf DeviceMajor:0 DeviceMinor:225 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-554 DeviceMajor:0 DeviceMinor:554 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/56e20b21-ba17-46ae-a740-0e7bd45eae5f/volumes/kubernetes.io~projected/kube-api-access-g89p7 DeviceMajor:0 DeviceMinor:812 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-933 DeviceMajor:0 DeviceMinor:933 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:overlay_0-985 DeviceMajor:0 DeviceMinor:985 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9ed0f2af24dce87330ff074848aa9e663492193136113ddae19217ced58912fa/userdata/shm DeviceMajor:0 DeviceMinor:469 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fb5dee36-70a4-47a4-afc2-d3209a476362/volumes/kubernetes.io~projected/kube-api-access-mvckz DeviceMajor:0 DeviceMinor:650 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~secret/default-certificate DeviceMajor:0 DeviceMinor:1033 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-416 DeviceMajor:0 DeviceMinor:416 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~projected/kube-api-access-2nbvg DeviceMajor:0 DeviceMinor:228 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-324 DeviceMajor:0 DeviceMinor:324 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-673 DeviceMajor:0 DeviceMinor:673 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5fc26918eff78c25b88ab7c1476de02488bb5aaefb35f371b1d5f4a9fb66fe67/userdata/shm DeviceMajor:0 DeviceMinor:921 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d163333f-fda5-4067-ad7c-6f646ae411c8/volumes/kubernetes.io~projected/kube-api-access-v2jgj DeviceMajor:0 DeviceMinor:233 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/91fc568a-61ad-400e-a54e-21d62e51bb17/volumes/kubernetes.io~secret/image-registry-operator-tls DeviceMajor:0 DeviceMinor:463 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:612 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-486 DeviceMajor:0 DeviceMinor:486 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-632 DeviceMajor:0 DeviceMinor:632 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1049 DeviceMajor:0 DeviceMinor:1049 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-94 DeviceMajor:0 DeviceMinor:94 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-156 DeviceMajor:0 DeviceMinor:156 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-477 DeviceMajor:0 DeviceMinor:477 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-464 DeviceMajor:0 DeviceMinor:464 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-988 DeviceMajor:0 DeviceMinor:988 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-121 DeviceMajor:0 DeviceMinor:121 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-607 DeviceMajor:0 DeviceMinor:607 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-710 DeviceMajor:0 DeviceMinor:710 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/ca06fac5-6707-4521-88ce-1768fede42c2/volumes/kubernetes.io~secret/webhook-cert DeviceMajor:0 DeviceMinor:585 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-951 DeviceMajor:0 DeviceMinor:951 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:213 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/31f19d97-50f9-4486-a8f9-df61ef2b0528/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:515 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-582 DeviceMajor:0 DeviceMinor:582 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-744 DeviceMajor:0 DeviceMinor:744 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:43 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/bc8f7d43b71dfb70df609090acace3d9c40c52d842b2f9e449644f3b06944eff/userdata/shm DeviceMajor:0 DeviceMinor:104 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-655 DeviceMajor:0 DeviceMinor:655 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-279 DeviceMajor:0 DeviceMinor:279 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-305 DeviceMajor:0 DeviceMinor:305 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/34889110-f282-4c2c-a2b0-620033559e1b/volumes/kubernetes.io~projected/kube-api-access-tlgsr DeviceMajor:0 DeviceMinor:410 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/72c7baf13da514fc8287177e18c17708037dccda828bfe98993c839421246be0/userdata/shm DeviceMajor:0 DeviceMinor:409 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:613 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-136 DeviceMajor:0 DeviceMinor:136 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~secret/ovn-node-metrics-cert DeviceMajor:0 DeviceMinor:126 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:219 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-307 DeviceMajor:0 DeviceMinor:307 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2760a216-fd4b-46d9-a4ec-2d3285ec02bd/volumes/kubernetes.io~projected/kube-api-access-4lqgs DeviceMajor:0 DeviceMinor:596 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7e938267-de1f-46f7-bf78-b0b3e810c4fa/volumes/kubernetes.io~projected/kube-api-access-kvmpk DeviceMajor:0 DeviceMinor:954 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:247 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f771149b-9d62-408e-be6f-72f575b1ec42/volumes/kubernetes.io~projected/kube-api-access-qmr7z DeviceMajor:0 DeviceMinor:443 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/volumes/kubernetes.io~projected/kube-api-access-5zzqj DeviceMajor:0 DeviceMinor:450 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f90e074f8ab2848261d7ebd8ff2e240e768ffdf256ecd5a6670700d24212e960/userdata/shm DeviceMajor:0 DeviceMinor:807 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-863 DeviceMajor:0 DeviceMinor:863 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/09550970a5450b6b18862ef0c3ad02b9ed34a2674a41f1a5f7113f8a2249dc19/userdata/shm DeviceMajor:0 DeviceMinor:129 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-152 DeviceMajor:0 DeviceMinor:152 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7d874a21-43aa-4d81-b904-853fb3da5a94/volumes/kubernetes.io~projected/kube-api-access-4b8jr DeviceMajor:0 DeviceMinor:234 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9863f7ff-4c8d-42a3-a822-01697cf9c920/volumes/kubernetes.io~projected/kube-api-access-44dmt DeviceMajor:0 DeviceMinor:801 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-886 DeviceMajor:0 DeviceMinor:886 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-996 DeviceMajor:0 DeviceMinor:996 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1078 DeviceMajor:0 DeviceMinor:1078 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-518 DeviceMajor:0 DeviceMinor:518 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6e799871-735a-44e8-8193-24c5bb388928/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:828 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-372 DeviceMajor:0 DeviceMinor:372 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-722 DeviceMajor:0 DeviceMinor:722 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-713 DeviceMajor:0 DeviceMinor:713 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/458ff1fddfd5f3b95a485a4b0cb8e88a31c5825a6f8733cb5141f441c672f2be/userdata/shm DeviceMajor:0 DeviceMinor:142 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-289 DeviceMajor:0 DeviceMinor:289 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-420 DeviceMajor:0 DeviceMinor:420 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/46015913-c499-49b1-a9f6-a61c6e96b13f/volumes/kubernetes.io~projected/kube-api-access-jc8xs DeviceMajor:0 DeviceMinor:230 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~secret/apiservice-cert DeviceMajor:0 DeviceMinor:461 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-586 DeviceMajor:0 DeviceMinor:586 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/075e91dd63c1e740e494eccc3ead8f62731d857d106f25bfcfaa922018525117/userdata/shm DeviceMajor:0 DeviceMinor:619 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-310 DeviceMajor:0 DeviceMinor:310 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/24dc2549a8ac6f39dd6f57c57f717e50a501dd15d60d7e2a80b78b592b931b48/userdata/shm DeviceMajor:0 DeviceMinor:114 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/volumes/kubernetes.io~projected/kube-api-access-2dlx5 DeviceMajor:0 DeviceMinor:118 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-204 DeviceMajor:0 DeviceMinor:204 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~secret/stats-auth DeviceMajor:0 DeviceMinor:1032 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1045 DeviceMajor:0 DeviceMinor:1045 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-134 DeviceMajor:0 
DeviceMinor:134 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/343ebb9e9f7133e28dc8b97a72067095722cd38fc5a1cd6bd72819c24b19f9a4/userdata/shm DeviceMajor:0 DeviceMinor:269 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-283 DeviceMajor:0 DeviceMinor:283 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65ef9aae-25a5-46c6-adf3-634f8f7a29bc/volumes/kubernetes.io~projected/kube-api-access-psvcz DeviceMajor:0 DeviceMinor:827 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-320 DeviceMajor:0 DeviceMinor:320 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/be6c496962a8987f21c42524b12c5d8025b66ff294e50520947b2cd7bb0af865/userdata/shm DeviceMajor:0 DeviceMinor:967 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1012 DeviceMajor:0 DeviceMinor:1012 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/0caabde8-d49a-431d-afe5-8b283188c11c/volumes/kubernetes.io~projected/kube-api-access-vccjz DeviceMajor:0 DeviceMinor:1038 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:overlay_0-297 DeviceMajor:0 DeviceMinor:297 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/60de85ba97afadaf001b2cf07b2675a887f7f03299ff0b0c7cf2b1b3a76b1ac0/userdata/shm DeviceMajor:0 DeviceMinor:453 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-431 DeviceMajor:0 DeviceMinor:431 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-56 DeviceMajor:0 DeviceMinor:56 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/2098d43302ad0e00931b30fb0473a362fee9e9000b89c27552d72a632e47afbd/userdata/shm DeviceMajor:0 DeviceMinor:264 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6923888f2474b2621a6d1f7b4784be73fc6d36844a46c111dbeb08c776fa9c52/userdata/shm DeviceMajor:0 DeviceMinor:273 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/95c7493b-ad9d-490e-83f3-aa28750b2b5e/volumes/kubernetes.io~projected/kube-api-access-wds6q DeviceMajor:0 DeviceMinor:614 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-629 DeviceMajor:0 DeviceMinor:629 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2581e5b5-8cbb-4fa5-9888-98fb572a6232/volumes/kubernetes.io~projected/kube-api-access-gh7ks DeviceMajor:0 DeviceMinor:825 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827068416 Type:vfs Inodes:1048576 HasInodes:true} {Device:/var/lib/kubelet/pods/46015913-c499-49b1-a9f6-a61c6e96b13f/volumes/kubernetes.io~secret/cluster-monitoring-operator-tls DeviceMajor:0 DeviceMinor:513 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/658f47ce3c2ae2a79030288ee1e25fc5980adee4919ddd23b5841d0fa0c0c0bb/userdata/shm DeviceMajor:0 DeviceMinor:521 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0/volumes/kubernetes.io~secret/samples-operator-tls DeviceMajor:0 DeviceMinor:640 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-66 DeviceMajor:0 DeviceMinor:66 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-106 DeviceMajor:0 DeviceMinor:106 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/b74de987-7962-425e-9447-24b285eb888f/volumes/kubernetes.io~empty-dir/tmp DeviceMajor:0 DeviceMinor:604 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:476 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-537 DeviceMajor:0 DeviceMinor:537 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/65392793bd94fcb00daa4e5e0befa1cdc4621ed4d78484330a8ebe817e639598/userdata/shm DeviceMajor:0 DeviceMinor:559 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-865 DeviceMajor:0 DeviceMinor:865 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d525ab3b1b5859648620b47f3759af91f036909616f6c49b660fe4a797d2c3f0/userdata/shm DeviceMajor:0 DeviceMinor:893 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/80eb89dc-ccfc-4360-811a-82a3ef6f7b65/volumes/kubernetes.io~projected/kube-api-access-t7wld DeviceMajor:0 DeviceMinor:643 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:216 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/6ad2904e-ece9-4d72-8683-c3e691e07497/volumes/kubernetes.io~projected/kube-api-access-k5gc8 DeviceMajor:0 DeviceMinor:221 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/74efa52b-fd97-418a-9a44-914442633f74/volumes/kubernetes.io~projected/kube-api-access-8jkzq DeviceMajor:0 DeviceMinor:231 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1063 DeviceMajor:0 DeviceMinor:1063 Capacity:214143315968 Type:vfs 
Inodes:104594880 HasInodes:true} {Device:overlay_0-1066 DeviceMajor:0 DeviceMinor:1066 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-50 DeviceMajor:0 DeviceMinor:50 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/37840cae91bb38842e33e47936d655dcd095da55d1359acc8622a63bc2e2f08c/userdata/shm DeviceMajor:0 DeviceMinor:254 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/29c3e604fa02f6812f7b745e1345b811004751bfdbd70448e21ada412112c94f/userdata/shm DeviceMajor:0 DeviceMinor:628 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-935 DeviceMajor:0 DeviceMinor:935 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-899 DeviceMajor:0 DeviceMinor:899 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-911 DeviceMajor:0 DeviceMinor:911 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/7e938267-de1f-46f7-bf78-b0b3e810c4fa/volumes/kubernetes.io~secret/machine-approver-tls DeviceMajor:0 DeviceMinor:953 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-189 DeviceMajor:0 DeviceMinor:189 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b/volumes/kubernetes.io~projected/kube-api-access-pqfj5 DeviceMajor:0 DeviceMinor:252 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~secret/encryption-config DeviceMajor:0 DeviceMinor:645 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/60159a917a34f7d64b3ba3a186dff388b89b7011483106eb857811a35e9e0fbb/userdata/shm DeviceMajor:0 DeviceMinor:86 Capacity:67108864 Type:vfs Inodes:4108170 
HasInodes:true} {Device:/run/containers/storage/overlay-containers/2f532f863189cb40165138dbb4b485ec37ab7ca8ad6591b3d559de34664f9afe/userdata/shm DeviceMajor:0 DeviceMinor:992 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1014 DeviceMajor:0 DeviceMinor:1014 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-485 DeviceMajor:0 DeviceMinor:485 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/95c7493b-ad9d-490e-83f3-aa28750b2b5e/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:621 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-752 DeviceMajor:0 DeviceMinor:752 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-672 DeviceMajor:0 DeviceMinor:672 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-547 DeviceMajor:0 DeviceMinor:547 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-666 DeviceMajor:0 DeviceMinor:666 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-292 DeviceMajor:0 DeviceMinor:292 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-651 DeviceMajor:0 DeviceMinor:651 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-800 DeviceMajor:0 DeviceMinor:800 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/volumes/kubernetes.io~projected/kube-api-access-5xmqc DeviceMajor:0 DeviceMinor:99 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/2697e850ca89be32459183985b3f9fee84b93466b86c6d103ecf18157fa8b712/userdata/shm DeviceMajor:0 DeviceMinor:260 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/6ad2904e-ece9-4d72-8683-c3e691e07497/volumes/kubernetes.io~secret/srv-cert DeviceMajor:0 DeviceMinor:514 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-456 DeviceMajor:0 DeviceMinor:456 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/061fc67620de1b52747445ea534c41ab6513f37b1f03a4e68b4308398d499797/userdata/shm DeviceMajor:0 DeviceMinor:522 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-754 DeviceMajor:0 DeviceMinor:754 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-759 DeviceMajor:0 DeviceMinor:759 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-322 DeviceMajor:0 DeviceMinor:322 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fbfc2caf-126e-41b9-9b31-05f7a45d8536/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:218 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/da609cd6cbb5b9e771ac633c351aa8997603432a2f5300b5aa8eef97f27120bb/userdata/shm DeviceMajor:0 DeviceMinor:271 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d89b5d71-5522-433e-a0bb-f2767332e744/volumes/kubernetes.io~projected/kube-api-access-lmnh2 DeviceMajor:0 DeviceMinor:430 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-994 DeviceMajor:0 DeviceMinor:994 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-493 DeviceMajor:0 DeviceMinor:493 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/volumes/kubernetes.io~secret/package-server-manager-serving-cert DeviceMajor:0 DeviceMinor:512 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/run/containers/storage/overlay-containers/770fca1b39851d439e2eba8f53f5e8c6629f240ddb04931d7537be93916cfc27/userdata/shm DeviceMajor:0 DeviceMinor:526 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-724 DeviceMajor:0 DeviceMinor:724 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-798 DeviceMajor:0 DeviceMinor:798 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:232 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-277 DeviceMajor:0 DeviceMinor:277 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-88 DeviceMajor:0 DeviceMinor:88 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1061 DeviceMajor:0 DeviceMinor:1061 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-501 DeviceMajor:0 DeviceMinor:501 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6da2aac0-42a0-45c2-93ec-b148f5889e8b/volumes/kubernetes.io~projected/kube-api-access-9rtds DeviceMajor:0 DeviceMinor:816 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/33bf722525c772142ec0cd09e0392bf59b78686977bb452929548b6bc04bfae5/userdata/shm DeviceMajor:0 DeviceMinor:892 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-1005 DeviceMajor:0 DeviceMinor:1005 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c687237e-50e5-405d-8fef-0efbc3866630/volumes/kubernetes.io~projected/kube-api-access-txxbg DeviceMajor:0 DeviceMinor:139 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:overlay_0-285 DeviceMajor:0 DeviceMinor:285 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/8ad2a6d5-6edf-4840-89f9-47847c8dac05/volumes/kubernetes.io~secret/marketplace-operator-metrics DeviceMajor:0 DeviceMinor:511 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/161d2fa6-a541-427a-a3e9-3297102a26f5/volumes/kubernetes.io~secret/webhook-certs DeviceMajor:0 DeviceMinor:517 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/234a363db8b880e78d95679d40d82c251d6d6e0dfd2a1cd27b2a2de32ddb7344/userdata/shm DeviceMajor:0 DeviceMinor:624 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:646 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/50cd4dbba0595bc95bd8379d7cfd780825252615fdd5f10e3bb402ec0d1d10ce/userdata/shm DeviceMajor:0 DeviceMinor:652 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1ea0ea4e5eed6b85ccc36c4c8c0dc8b3b9419340ae19c9233bb9409a6a59c6b0/userdata/shm DeviceMajor:0 DeviceMinor:535 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730829824 Type:vfs Inodes:819200 HasInodes:true} {Device:/var/lib/kubelet/pods/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/volumes/kubernetes.io~projected/kube-api-access-98t5h DeviceMajor:0 DeviceMinor:250 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-295 DeviceMajor:0 DeviceMinor:295 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-867 DeviceMajor:0 DeviceMinor:867 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-908 DeviceMajor:0 DeviceMinor:908 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:475 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b61dc113f1a4bef80c641546e2474c72c189dd507d27eb4f40039500f234ba15/userdata/shm DeviceMajor:0 DeviceMinor:772 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6d6b932de4337ed7b1b29feb31dfecf2b00d8a0c27165dce010504a3cf2e5f0a/userdata/shm DeviceMajor:0 DeviceMinor:609 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/de46c12a-aa3e-442e-bcc4-365d05f50103/volumes/kubernetes.io~projected/kube-api-access-sjkgv DeviceMajor:0 DeviceMinor:101 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~projected/bound-sa-token DeviceMajor:0 DeviceMinor:238 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/volumes/kubernetes.io~projected/kube-api-access-nbcg4 DeviceMajor:0 DeviceMinor:694 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-314 DeviceMajor:0 DeviceMinor:314 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-949 DeviceMajor:0 DeviceMinor:949 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/719fed2d09a4c83ba7a2065c6d705852286e4074c168ef17e96ec1f4c19087b7/userdata/shm DeviceMajor:0 DeviceMinor:61 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/cfc26f6d3347a68e4b723da2b42435408304ba3ab936c3e96d2706d8fe04b73e/userdata/shm DeviceMajor:0 DeviceMinor:97 
Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-194 DeviceMajor:0 DeviceMinor:194 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/7477b641a786f084712c4f118bc6505bfe95f699f9d24590d99cd384fbe82b5c/userdata/shm DeviceMajor:0 DeviceMinor:257 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/b055cbc200ec047aacb638d82e675e244c203df858dcd01394edc1e4bc014d9f/userdata/shm DeviceMajor:0 DeviceMinor:467 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-969 DeviceMajor:0 DeviceMinor:969 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/c55a215a-9a95-4f48-8668-9b76503c3044/volumes/kubernetes.io~projected/kube-api-access-g8n5d DeviceMajor:0 DeviceMinor:857 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/032c2b20f604f0aca4515b1e3c70d1cee6305981fa2fc0ade62b27cbdcf9dd58/userdata/shm DeviceMajor:0 DeviceMinor:1025 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-622 DeviceMajor:0 DeviceMinor:622 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/56e20b21-ba17-46ae-a740-0e7bd45eae5f/volumes/kubernetes.io~secret/control-plane-machine-set-operator-tls DeviceMajor:0 DeviceMinor:818 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d243e098a2bf2092df86880b77adaed46c59e61e072be24c44913d8532c87256/userdata/shm DeviceMajor:0 DeviceMinor:98 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/3418d0fb-d0ae-4634-a645-dc387a19147f/volumes/kubernetes.io~projected/kube-api-access-tdpt2 DeviceMajor:0 DeviceMinor:986 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-79 DeviceMajor:0 DeviceMinor:79 Capacity:214143315968 
Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/d3c9a7ae76767c58b811cabb43c24171c3fc11aa2f0559500ff39ed6ef226896/userdata/shm DeviceMajor:0 DeviceMinor:414 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/5cba1e5f698e98df3c15a1fd7c6d0586c623f3939d642ba858d361854e19b48c/userdata/shm DeviceMajor:0 DeviceMinor:468 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/81835d51-a414-440f-889b-690561e98d6a/volumes/kubernetes.io~secret/catalogserver-certs DeviceMajor:0 DeviceMinor:676 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-836 DeviceMajor:0 DeviceMinor:836 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-541 DeviceMajor:0 DeviceMinor:541 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/3418d0fb-d0ae-4634-a645-dc387a19147f/volumes/kubernetes.io~secret/proxy-tls DeviceMajor:0 DeviceMinor:980 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/1c7730337a9a87451fb670287a107087b846f8e46926bb6ce0f97f0cb44507c6/userdata/shm DeviceMajor:0 DeviceMinor:525 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-579 DeviceMajor:0 DeviceMinor:579 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/581ff17d-f121-4ece-8e45-81f1f710d163/volumes/kubernetes.io~projected/kube-api-access-pgz5w DeviceMajor:0 DeviceMinor:418 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-543 DeviceMajor:0 DeviceMinor:543 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-877 DeviceMajor:0 DeviceMinor:877 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8c377a67-e763-4925-afae-a7f8546a369b/volumes/kubernetes.io~projected/kube-api-access-t6wzz DeviceMajor:0 DeviceMinor:125 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/etcd-client DeviceMajor:0 DeviceMinor:209 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-494 DeviceMajor:0 DeviceMinor:494 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2ec42095-36f5-48cf-af9d-e7a60f6cb121/volumes/kubernetes.io~projected/kube-api-access-hngc8 DeviceMajor:0 DeviceMinor:1039 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/6383bf63a7de4dff04fb7232e0771348dcd4ed98fc693d66e08acc1fc0e8ce69/userdata/shm DeviceMajor:0 DeviceMinor:717 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/a1c5dbaa4dceb86f442ef113d610b47a414073825f45b1abbdb54ba9c2a0c83a/userdata/shm DeviceMajor:0 DeviceMinor:433 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b74de987-7962-425e-9447-24b285eb888f/volumes/kubernetes.io~projected/kube-api-access-845hm DeviceMajor:0 DeviceMinor:591 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-679 DeviceMajor:0 DeviceMinor:679 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-109 DeviceMajor:0 DeviceMinor:109 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1043 DeviceMajor:0 DeviceMinor:1043 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-84 DeviceMajor:0 DeviceMinor:84 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-179 DeviceMajor:0 DeviceMinor:179 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} 
{Device:/var/lib/kubelet/pods/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/volumes/kubernetes.io~projected/kube-api-access-b4qsk DeviceMajor:0 DeviceMinor:239 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-303 DeviceMajor:0 DeviceMinor:303 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-719 DeviceMajor:0 DeviceMinor:719 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65dd1dc7-1b90-40f6-82c9-dee90a1fa852/volumes/kubernetes.io~projected/kube-api-access-vt62j DeviceMajor:0 DeviceMinor:602 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-69 DeviceMajor:0 DeviceMinor:69 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/2760a216-fd4b-46d9-a4ec-2d3285ec02bd/volumes/kubernetes.io~secret/machine-api-operator-tls DeviceMajor:0 DeviceMinor:536 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-195 DeviceMajor:0 DeviceMinor:195 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/536a2de1-e13c-47d1-b61d-88e0a5fd2851/volumes/kubernetes.io~projected/kube-api-access-pt5g7 DeviceMajor:0 DeviceMinor:648 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-659 DeviceMajor:0 DeviceMinor:659 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/75a53c09-210a-4346-99b0-a632b9e0a3c9/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:460 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/volumes/kubernetes.io~secret/metrics-certs DeviceMajor:0 DeviceMinor:516 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-343 DeviceMajor:0 DeviceMinor:343 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1073 DeviceMajor:0 DeviceMinor:1073 
Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-577 DeviceMajor:0 DeviceMinor:577 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-581 DeviceMajor:0 DeviceMinor:581 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/volumes/kubernetes.io~projected/ca-certs DeviceMajor:0 DeviceMinor:678 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/dbcb4b80-425a-4dd5-93a8-bb462f641ef1/volumes/kubernetes.io~projected/kube-api-access-sd26j DeviceMajor:0 DeviceMinor:90 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/49a28ab7-1176-4213-b037-19fe18bbe57b/volumes/kubernetes.io~projected/kube-api-access-n58nf DeviceMajor:0 DeviceMinor:127 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-67 DeviceMajor:0 DeviceMinor:67 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/9d2f93bd-e4ce-4ed2-b249-946338f753ed/volumes/kubernetes.io~projected/kube-api-access-qq6v6 DeviceMajor:0 DeviceMinor:808 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/9d7ef7e44d8730ad2d704e378ac9c92d16d1c8fa25bdd5cfebf66d699f0e0906/userdata/shm DeviceMajor:0 DeviceMinor:707 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/14fb0b2eb240219320e6992cc4659cd81f4b0471ff79cf3cf2e89fa8f1d605a0/userdata/shm DeviceMajor:0 DeviceMinor:425 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-937 DeviceMajor:0 DeviceMinor:937 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/31f19d97-50f9-4486-a8f9-df61ef2b0528/volumes/kubernetes.io~projected/kube-api-access-4bzs5 DeviceMajor:0 DeviceMinor:249 Capacity:32475533312 Type:vfs Inodes:4108170 
HasInodes:true} {Device:overlay_0-391 DeviceMajor:0 DeviceMinor:391 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-451 DeviceMajor:0 DeviceMinor:451 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/629398d15647c2b03b039e1c1901983e50f62b43495a0b3d1356a29ab7579f04/userdata/shm DeviceMajor:0 DeviceMinor:259 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/7d874a21-43aa-4d81-b904-853fb3da5a94/volumes/kubernetes.io~secret/metrics-tls DeviceMajor:0 DeviceMinor:462 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-538 DeviceMajor:0 DeviceMinor:538 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-750 DeviceMajor:0 DeviceMinor:750 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/65ef9aae-25a5-46c6-adf3-634f8f7a29bc/volumes/kubernetes.io~secret/cluster-storage-operator-serving-cert DeviceMajor:0 DeviceMinor:826 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/ec17a1f92974fc202f31cbb68ea7af983419d8c972a92fa5e88ff84c017f8e6d/userdata/shm DeviceMajor:0 DeviceMinor:128 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/77e6cd9e-b6ef-491c-a5c3-60dab81fd752/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:223 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/b5757329-8692-4719-b3c7-b5df78110fcf/volumes/kubernetes.io~projected/kube-api-access-ztdc9 DeviceMajor:0 DeviceMinor:226 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-822 DeviceMajor:0 DeviceMinor:822 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/97073f9eaab3f9a84928efdbbff240af7a669518355dadabf3d81bed9aec4570/userdata/shm 
DeviceMajor:0 DeviceMinor:844 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/e4bd5af8e1a96f925e1b64f7902f036f84c366c1ef01152f845644c1aa6a1b22/userdata/shm DeviceMajor:0 DeviceMinor:466 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/ca06fac5-6707-4521-88ce-1768fede42c2/volumes/kubernetes.io~projected/kube-api-access-2pt2w DeviceMajor:0 DeviceMinor:838 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-48 DeviceMajor:0 DeviceMinor:48 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-116 DeviceMajor:0 DeviceMinor:116 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/161d2fa6-a541-427a-a3e9-3297102a26f5/volumes/kubernetes.io~projected/kube-api-access-q5lg5 DeviceMajor:0 DeviceMinor:236 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run/containers/storage/overlay-containers/3be9d647691aac847285be1df15dfc7365f7b948dc0fd04d51bc4a610b82da33/userdata/shm DeviceMajor:0 DeviceMinor:465 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-439 DeviceMajor:0 DeviceMinor:439 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-1010 DeviceMajor:0 DeviceMinor:1010 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/run/containers/storage/overlay-containers/f43b125de8c8fd9d38adfd65f25335aed5effea8536c299385f910d4e86c6dd3/userdata/shm DeviceMajor:0 DeviceMinor:1041 Capacity:67108864 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-82 DeviceMajor:0 DeviceMinor:82 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/6fd82994-f4d4-49e9-8742-07e206322e76/volumes/kubernetes.io~projected/kube-api-access-k8l9r DeviceMajor:0 DeviceMinor:235 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} 
{Device:/var/lib/kubelet/pods/c6db75e5-efd1-4bfa-9941-0934d7621ba2/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:244 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:overlay_0-301 DeviceMajor:0 DeviceMinor:301 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-52 DeviceMajor:0 DeviceMinor:52 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-626 DeviceMajor:0 DeviceMinor:626 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-721 DeviceMajor:0 DeviceMinor:721 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-861 DeviceMajor:0 DeviceMinor:861 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:overlay_0-140 DeviceMajor:0 DeviceMinor:140 Capacity:214143315968 Type:vfs Inodes:104594880 HasInodes:true} {Device:/var/lib/kubelet/pods/fde89b0b-7133-4b97-9e35-51c0382bd366/volumes/kubernetes.io~projected/kube-api-access DeviceMajor:0 DeviceMinor:243 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true} {Device:/var/lib/kubelet/pods/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7/volumes/kubernetes.io~secret/serving-cert DeviceMajor:0 DeviceMinor:417 Capacity:32475533312 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none} 252:16:{Name:vdb Major:252 Minor:16 Size:21474836480 Scheduler:none} 252:32:{Name:vdc Major:252 Minor:32 Size:21474836480 Scheduler:none} 252:48:{Name:vdd Major:252 Minor:48 Size:21474836480 Scheduler:none} 252:64:{Name:vde Major:252 Minor:64 Size:21474836480 Scheduler:none}] NetworkDevices:[{Name:032c2b20f604f0a MacAddress:36:46:7d:bb:07:51 Speed:10000 Mtu:8900} {Name:061fc67620de1b5 MacAddress:96:00:19:0d:1e:97 Speed:10000 Mtu:8900} {Name:075e91dd63c1e74 MacAddress:72:0d:5a:1c:83:42 Speed:10000 Mtu:8900} {Name:14fb0b2eb240219 MacAddress:d6:37:6b:48:92:73 Speed:10000 Mtu:8900} {Name:19d9989080bb992 
MacAddress:c2:e6:e1:c0:72:01 Speed:10000 Mtu:8900} {Name:1c7730337a9a874 MacAddress:4e:14:17:52:02:68 Speed:10000 Mtu:8900} {Name:1ee1fa592b43fd0 MacAddress:a6:fe:ec:1e:ff:9b Speed:10000 Mtu:8900} {Name:21da7cd9c215e50 MacAddress:56:b8:8b:f8:46:d0 Speed:10000 Mtu:8900} {Name:2697e850ca89be3 MacAddress:3e:30:1b:d5:63:bc Speed:10000 Mtu:8900} {Name:29c3e604fa02f68 MacAddress:5e:c6:13:0f:bd:8b Speed:10000 Mtu:8900} {Name:33bf722525c7721 MacAddress:5a:d1:32:46:54:6a Speed:10000 Mtu:8900} {Name:343ebb9e9f7133e MacAddress:12:89:57:f0:45:93 Speed:10000 Mtu:8900} {Name:3678d76d6368f04 MacAddress:16:22:30:dd:6a:07 Speed:10000 Mtu:8900} {Name:37840cae91bb388 MacAddress:5a:7c:f0:9e:5b:a8 Speed:10000 Mtu:8900} {Name:3be9d647691aac8 MacAddress:ba:97:c3:14:7c:26 Speed:10000 Mtu:8900} {Name:4ca67e8bef4478f MacAddress:0e:7a:e2:50:ce:39 Speed:10000 Mtu:8900} {Name:50cd4dbba0595bc MacAddress:82:fd:c1:fe:22:60 Speed:10000 Mtu:8900} {Name:5cba1e5f698e98d MacAddress:ee:73:24:1d:8a:7b Speed:10000 Mtu:8900} {Name:5cdd48b8a2071aa MacAddress:be:69:9b:ab:d5:38 Speed:10000 Mtu:8900} {Name:5fc26918eff78c2 MacAddress:82:02:49:36:92:e8 Speed:10000 Mtu:8900} {Name:60159a917a34f7d MacAddress:42:77:47:aa:e9:6b Speed:10000 Mtu:8900} {Name:60de85ba97afada MacAddress:f2:d7:76:d4:35:12 Speed:10000 Mtu:8900} {Name:629398d15647c2b MacAddress:f2:33:cd:25:a8:21 Speed:10000 Mtu:8900} {Name:6383bf63a7de4df MacAddress:d2:89:78:a6:40:78 Speed:10000 Mtu:8900} {Name:65392793bd94fcb MacAddress:0e:19:ff:0e:35:45 Speed:10000 Mtu:8900} {Name:658f47ce3c2ae2a MacAddress:12:56:8a:d9:53:a8 Speed:10000 Mtu:8900} {Name:6923888f2474b26 MacAddress:7e:c0:a1:07:ff:81 Speed:10000 Mtu:8900} {Name:6d6b932de4337ed MacAddress:56:b1:17:65:41:ba Speed:10000 Mtu:8900} {Name:7045bd9f4a827f5 MacAddress:32:77:c4:a7:25:49 Speed:10000 Mtu:8900} {Name:72c7baf13da514f MacAddress:1e:ed:2e:1d:6b:d6 Speed:10000 Mtu:8900} {Name:7477b641a786f08 MacAddress:d6:b4:43:9e:aa:1c Speed:10000 Mtu:8900} {Name:770fca1b39851d4 MacAddress:ae:49:67:68:98:50 
Speed:10000 Mtu:8900} {Name:7d990bd61a1e1a5 MacAddress:9a:91:60:89:aa:e5 Speed:10000 Mtu:8900} {Name:97073f9eaab3f9a MacAddress:c2:63:57:c7:56:5d Speed:10000 Mtu:8900} {Name:9d5e008bf9f6b69 MacAddress:16:fc:55:5d:c7:0e Speed:10000 Mtu:8900} {Name:9d7ef7e44d8730a MacAddress:1e:4a:83:49:80:6b Speed:10000 Mtu:8900} {Name:9ed0f2af24dce87 MacAddress:0a:ef:b9:ad:59:a8 Speed:10000 Mtu:8900} {Name:a1c5dbaa4dceb86 MacAddress:fa:97:74:8f:b0:d6 Speed:10000 Mtu:8900} {Name:aed424610f368f2 MacAddress:c6:52:af:43:58:46 Speed:10000 Mtu:8900} {Name:b055cbc200ec047 MacAddress:ee:8e:49:a1:91:12 Speed:10000 Mtu:8900} {Name:b61dc113f1a4bef MacAddress:1a:ad:68:74:79:b6 Speed:10000 Mtu:8900} {Name:br-ex MacAddress:fa:16:9e:81:f6:10 Speed:0 Mtu:9000} {Name:br-int MacAddress:a2:da:ad:9d:3f:92 Speed:0 Mtu:8900} {Name:c598fb9b925a609 MacAddress:92:f7:98:b0:c3:52 Speed:10000 Mtu:8900} {Name:cdd0c71504e94f6 MacAddress:fe:22:fb:84:82:02 Speed:10000 Mtu:8900} {Name:d243e098a2bf209 MacAddress:1e:89:54:ed:25:ca Speed:10000 Mtu:8900} {Name:d285e2cd3ad810b MacAddress:fa:98:38:e6:84:67 Speed:10000 Mtu:8900} {Name:d525ab3b1b58596 MacAddress:7e:7e:62:3c:24:6c Speed:10000 Mtu:8900} {Name:da609cd6cbb5b9e MacAddress:7a:82:86:d7:b0:5c Speed:10000 Mtu:8900} {Name:de825527d944f68 MacAddress:7e:b4:c7:9b:06:52 Speed:10000 Mtu:8900} {Name:e4bd5af8e1a96f9 MacAddress:aa:b0:6f:7d:7a:03 Speed:10000 Mtu:8900} {Name:eth0 MacAddress:fa:16:9e:81:f6:10 Speed:-1 Mtu:9000} {Name:eth1 MacAddress:fa:16:3e:f6:d3:bd Speed:-1 Mtu:9000} {Name:f26f2fe408a83b7 MacAddress:c2:48:3b:93:a1:1f Speed:10000 Mtu:8900} {Name:f90e074f8ab2848 MacAddress:6a:f0:33:ea:85:78 Speed:10000 Mtu:8900} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:80:00:02 Speed:0 Mtu:8900} {Name:ovs-system MacAddress:d6:2f:ab:d3:f0:10 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654132736 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 
Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} 
{Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Mar 13 01:17:32.203411 master-0 kubenswrapper[19803]: I0313 01:17:32.202724 19803 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available. Mar 13 01:17:32.203411 master-0 kubenswrapper[19803]: I0313 01:17:32.202806 19803 manager.go:233] Version: {KernelVersion:5.14.0-427.111.1.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202602172219-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Mar 13 01:17:32.203411 master-0 kubenswrapper[19803]: I0313 01:17:32.203098 19803 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Mar 13 01:17:32.203411 master-0 kubenswrapper[19803]: I0313 01:17:32.203267 19803 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203301 19803 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"master-0","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"500m","ephemeral-storage":"1Gi","memory":"1Gi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203554 19803 topology_manager.go:138] "Creating topology manager with none policy" Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203566 19803 container_manager_linux.go:303] "Creating device plugin manager" Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203574 19803 
manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203597 19803 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203633 19803 state_mem.go:36] "Initialized new in-memory state store" Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203725 19803 server.go:1245] "Using root directory" path="/var/lib/kubelet" Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203791 19803 kubelet.go:418] "Attempting to sync node with API server" Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203804 19803 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203817 19803 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203829 19803 kubelet.go:324] "Adding apiserver pod source" Mar 13 01:17:32.203940 master-0 kubenswrapper[19803]: I0313 01:17:32.203851 19803 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 13 01:17:32.205964 master-0 kubenswrapper[19803]: I0313 01:17:32.205853 19803 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.13-8.rhaos4.18.gitd78977c.el9" apiVersion="v1" Mar 13 01:17:32.206602 master-0 kubenswrapper[19803]: I0313 01:17:32.206270 19803 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Mar 13 01:17:32.207074 master-0 kubenswrapper[19803]: I0313 01:17:32.207022 19803 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Mar 13 01:17:32.207448 master-0 kubenswrapper[19803]: I0313 01:17:32.207393 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Mar 13 01:17:32.207595 master-0 kubenswrapper[19803]: I0313 01:17:32.207454 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Mar 13 01:17:32.207595 master-0 kubenswrapper[19803]: I0313 01:17:32.207475 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Mar 13 01:17:32.207595 master-0 kubenswrapper[19803]: I0313 01:17:32.207490 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Mar 13 01:17:32.207595 master-0 kubenswrapper[19803]: I0313 01:17:32.207504 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Mar 13 01:17:32.207595 master-0 kubenswrapper[19803]: I0313 01:17:32.207547 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Mar 13 01:17:32.207595 master-0 kubenswrapper[19803]: I0313 01:17:32.207561 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Mar 13 01:17:32.207595 master-0 kubenswrapper[19803]: I0313 01:17:32.207574 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Mar 13 01:17:32.207595 master-0 kubenswrapper[19803]: I0313 01:17:32.207590 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Mar 13 01:17:32.207595 master-0 kubenswrapper[19803]: I0313 01:17:32.207609 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Mar 13 01:17:32.208124 master-0 kubenswrapper[19803]: I0313 01:17:32.207661 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Mar 13 01:17:32.208124 master-0 kubenswrapper[19803]: I0313 01:17:32.207688 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Mar 13 01:17:32.208124 master-0 kubenswrapper[19803]: I0313 01:17:32.207769 19803 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Mar 13 01:17:32.208679 master-0 kubenswrapper[19803]: I0313 01:17:32.208494 19803 server.go:1280] "Started kubelet"
Mar 13 01:17:32.209601 master-0 kubenswrapper[19803]: I0313 01:17:32.208678 19803 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Mar 13 01:17:32.209750 master-0 systemd[1]: Started Kubernetes Kubelet.
Mar 13 01:17:32.212082 master-0 kubenswrapper[19803]: I0313 01:17:32.211315 19803 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Mar 13 01:17:32.212082 master-0 kubenswrapper[19803]: I0313 01:17:32.211493 19803 server_v1.go:47] "podresources" method="list" useActivePods=true
Mar 13 01:17:32.217479 master-0 kubenswrapper[19803]: I0313 01:17:32.217413 19803 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Mar 13 01:17:32.219632 master-0 kubenswrapper[19803]: I0313 01:17:32.218391 19803 server.go:449] "Adding debug handlers to kubelet server"
Mar 13 01:17:32.229282 master-0 kubenswrapper[19803]: I0313 01:17:32.229193 19803 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160
Mar 13 01:17:32.230306 master-0 kubenswrapper[19803]: I0313 01:17:32.229852 19803 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160
Mar 13 01:17:32.258109 master-0 kubenswrapper[19803]: I0313 01:17:32.258036 19803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Mar 13 01:17:32.258312 master-0 kubenswrapper[19803]: I0313 01:17:32.258129 19803 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Mar 13 01:17:32.258885 master-0 kubenswrapper[19803]: I0313 01:17:32.258788 19803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-03-14 01:02:11 +0000 UTC, rotation deadline is 2026-03-13 19:38:01.89901867 +0000 UTC
Mar 13 01:17:32.259045 master-0 kubenswrapper[19803]: I0313 01:17:32.259019 19803 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 18h20m29.640008059s for next certificate rotation
Mar 13 01:17:32.259207 master-0 kubenswrapper[19803]: I0313 01:17:32.258898 19803 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Mar 13 01:17:32.259346 master-0 kubenswrapper[19803]: I0313 01:17:32.258850 19803 volume_manager.go:287] "The desired_state_of_world populator starts"
Mar 13 01:17:32.259555 master-0 kubenswrapper[19803]: I0313 01:17:32.259460 19803 volume_manager.go:289] "Starting Kubelet Volume Manager"
Mar 13 01:17:32.261830 master-0 kubenswrapper[19803]: I0313 01:17:32.261796 19803 factory.go:55] Registering systemd factory
Mar 13 01:17:32.262023 master-0 kubenswrapper[19803]: I0313 01:17:32.261957 19803 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 13 01:17:32.262180 master-0 kubenswrapper[19803]: I0313 01:17:32.261969 19803 factory.go:221] Registration of the systemd container factory successfully
Mar 13 01:17:32.263603 master-0 kubenswrapper[19803]: I0313 01:17:32.262737 19803 factory.go:153] Registering CRI-O factory
Mar 13 01:17:32.263603 master-0 kubenswrapper[19803]: I0313 01:17:32.262770 19803 factory.go:221] Registration of the crio container factory successfully
Mar 13 01:17:32.263603 master-0 kubenswrapper[19803]: I0313 01:17:32.262903 19803 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Mar 13 01:17:32.263603 master-0 kubenswrapper[19803]: I0313 01:17:32.262943 19803 factory.go:103] Registering Raw factory
Mar 13 01:17:32.263603 master-0 kubenswrapper[19803]: I0313 01:17:32.262970 19803 manager.go:1196] Started watching for new ooms in manager
Mar 13 01:17:32.265943 master-0 kubenswrapper[19803]: I0313 01:17:32.265891 19803 manager.go:319] Starting recovery of all containers
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.280997 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" volumeName="kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-serving-ca" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281107 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c55a215a-9a95-4f48-8668-9b76503c3044" volumeName="kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281121 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb5dee36-70a4-47a4-afc2-d3209a476362" volumeName="kubernetes.io/projected/fb5dee36-70a4-47a4-afc2-d3209a476362-kube-api-access-mvckz" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281136 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" volumeName="kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-kube-api-access-nbcg4" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281150 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3418d0fb-d0ae-4634-a645-dc387a19147f" volumeName="kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls" seLinuxMountContext=""
Mar 13 01:17:32.286901
master-0 kubenswrapper[19803]: I0313 01:17:32.281165 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-service-ca" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281178 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95c7493b-ad9d-490e-83f3-aa28750b2b5e" volumeName="kubernetes.io/projected/95c7493b-ad9d-490e-83f3-aa28750b2b5e-kube-api-access-wds6q" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281194 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74efa52b-fd97-418a-9a44-914442633f74" volumeName="kubernetes.io/configmap/74efa52b-fd97-418a-9a44-914442633f74-config" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281213 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75a53c09-210a-4346-99b0-a632b9e0a3c9" volumeName="kubernetes.io/configmap/75a53c09-210a-4346-99b0-a632b9e0a3c9-trusted-ca" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281228 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5757329-8692-4719-b3c7-b5df78110fcf" volumeName="kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-service-ca-bundle" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281242 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbfc2caf-126e-41b9-9b31-05f7a45d8536" volumeName="kubernetes.io/projected/fbfc2caf-126e-41b9-9b31-05f7a45d8536-kube-api-access-2nbvg" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281260 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" volumeName="kubernetes.io/empty-dir/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-cache" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281276 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2581e5b5-8cbb-4fa5-9888-98fb572a6232" volumeName="kubernetes.io/secret/2581e5b5-8cbb-4fa5-9888-98fb572a6232-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281295 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59" volumeName="kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281337 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ad2904e-ece9-4d72-8683-c3e691e07497" volumeName="kubernetes.io/projected/6ad2904e-ece9-4d72-8683-c3e691e07497-kube-api-access-k5gc8" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281348 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d89b5d71-5522-433e-a0bb-f2767332e744" volumeName="kubernetes.io/secret/d89b5d71-5522-433e-a0bb-f2767332e744-signing-key" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281359 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de46c12a-aa3e-442e-bcc4-365d05f50103" volumeName="kubernetes.io/projected/de46c12a-aa3e-442e-bcc4-365d05f50103-kube-api-access-sjkgv" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281375 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2760a216-fd4b-46d9-a4ec-2d3285ec02bd" volumeName="kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-images" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281390 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2ec42095-36f5-48cf-af9d-e7a60f6cb121" volumeName="kubernetes.io/projected/2ec42095-36f5-48cf-af9d-e7a60f6cb121-kube-api-access-hngc8" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281403 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65ef9aae-25a5-46c6-adf3-634f8f7a29bc" volumeName="kubernetes.io/secret/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-cluster-storage-operator-serving-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281417 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c55a215a-9a95-4f48-8668-9b76503c3044" volumeName="kubernetes.io/projected/c55a215a-9a95-4f48-8668-9b76503c3044-kube-api-access-g8n5d" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281432 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ad2a6d5-6edf-4840-89f9-47847c8dac05" volumeName="kubernetes.io/configmap/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-trusted-ca" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281448 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3bf9dde-ca5b-46b8-883c-51e88ddf52e1" volumeName="kubernetes.io/projected/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-kube-api-access" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281464 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6db75e5-efd1-4bfa-9941-0934d7621ba2" volumeName="kubernetes.io/configmap/c6db75e5-efd1-4bfa-9941-0934d7621ba2-config" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281478 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d89b5d71-5522-433e-a0bb-f2767332e744" volumeName="kubernetes.io/projected/d89b5d71-5522-433e-a0bb-f2767332e744-kube-api-access-lmnh2" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281494 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" volumeName="kubernetes.io/projected/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-kube-api-access-smhrl" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281561 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65dd1dc7-1b90-40f6-82c9-dee90a1fa852" volumeName="kubernetes.io/projected/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-kube-api-access-vt62j" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281597 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69da0e58-2ae6-4d4b-b125-77e93df3d660" volumeName="kubernetes.io/configmap/69da0e58-2ae6-4d4b-b125-77e93df3d660-iptables-alerter-script" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281613 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752"
volumeName="kubernetes.io/projected/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-kube-api-access-fz9qf" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281628 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" volumeName="kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-whereabouts-configmap" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281643 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c377a67-e763-4925-afae-a7f8546a369b" volumeName="kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-ovnkube-config" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281659 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b67a99-eada-44d7-93eb-cc3ced777fc6" volumeName="kubernetes.io/secret/96b67a99-eada-44d7-93eb-cc3ced777fc6-serving-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281673 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" volumeName="kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-image-import-ca" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281687 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" volumeName="kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-client" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281703 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" volumeName="kubernetes.io/projected/0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a-kube-api-access-5zzqj" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281721 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21110b48-25fc-434a-b156-7f6bd6064bed" volumeName="kubernetes.io/projected/21110b48-25fc-434a-b156-7f6bd6064bed-kube-api-access-9npsh" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281737 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" volumeName="kubernetes.io/secret/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-cluster-olm-operator-serving-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281755 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80eb89dc-ccfc-4360-811a-82a3ef6f7b65" volumeName="kubernetes.io/projected/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-kube-api-access-t7wld" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281774 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f771149b-9d62-408e-be6f-72f575b1ec42" volumeName="kubernetes.io/projected/f771149b-9d62-408e-be6f-72f575b1ec42-kube-api-access-qmr7z" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281792 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e938267-de1f-46f7-bf78-b0b3e810c4fa" volumeName="kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-auth-proxy-config" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281807 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b67a99-eada-44d7-93eb-cc3ced777fc6" volumeName="kubernetes.io/projected/96b67a99-eada-44d7-93eb-cc3ced777fc6-kube-api-access-4rg4g" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281823 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca06fac5-6707-4521-88ce-1768fede42c2" volumeName="kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281838 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2f0667c-90d6-4a6b-b540-9bd0ab5973ea" volumeName="kubernetes.io/projected/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-kube-api-access" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281854 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" volumeName="kubernetes.io/empty-dir/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-operand-assets" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281871 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49a28ab7-1176-4213-b037-19fe18bbe57b" volumeName="kubernetes.io/secret/49a28ab7-1176-4213-b037-19fe18bbe57b-ovn-node-metrics-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281887 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="536a2de1-e13c-47d1-b61d-88e0a5fd2851" volumeName="kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-encryption-config" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281902 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="69da0e58-2ae6-4d4b-b125-77e93df3d660" volumeName="kubernetes.io/projected/69da0e58-2ae6-4d4b-b125-77e93df3d660-kube-api-access-pzxv5" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281916 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ad2a6d5-6edf-4840-89f9-47847c8dac05" volumeName="kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281931 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" volumeName="kubernetes.io/projected/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-kube-api-access-pj7cp" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281945 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" volumeName="kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-trusted-ca-bundle" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281961 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbcb4b80-425a-4dd5-93a8-bb462f641ef1" volumeName="kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281976 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31f19d97-50f9-4486-a8f9-df61ef2b0528" volumeName="kubernetes.io/projected/31f19d97-50f9-4486-a8f9-df61ef2b0528-kube-api-access-4bzs5" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.281999 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the
actual state" pod="" podName="46015913-c499-49b1-a9f6-a61c6e96b13f" volumeName="kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282017 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="581ff17d-f121-4ece-8e45-81f1f710d163" volumeName="kubernetes.io/projected/581ff17d-f121-4ece-8e45-81f1f710d163-kube-api-access-pgz5w" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282035 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d874a21-43aa-4d81-b904-853fb3da5a94" volumeName="kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282051 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc" volumeName="kubernetes.io/projected/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-kube-api-access-5xmqc" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282067 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e799871-735a-44e8-8193-24c5bb388928" volumeName="kubernetes.io/empty-dir/6e799871-735a-44e8-8193-24c5bb388928-snapshots" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282082 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e938267-de1f-46f7-bf78-b0b3e810c4fa" volumeName="kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-config" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282095 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7" volumeName="kubernetes.io/configmap/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-trusted-ca" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282109 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c687237e-50e5-405d-8fef-0efbc3866630" volumeName="kubernetes.io/projected/c687237e-50e5-405d-8fef-0efbc3866630-kube-api-access-txxbg" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282125 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0caabde8-d49a-431d-afe5-8b283188c11c" volumeName="kubernetes.io/projected/0caabde8-d49a-431d-afe5-8b283188c11c-kube-api-access-vccjz" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282140 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21110b48-25fc-434a-b156-7f6bd6064bed" volumeName="kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282155 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2760a216-fd4b-46d9-a4ec-2d3285ec02bd" volumeName="kubernetes.io/projected/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-kube-api-access-4lqgs" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282214 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6da2aac0-42a0-45c2-93ec-b148f5889e8b" volumeName="kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-utilities" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282232 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb5dee36-70a4-47a4-afc2-d3209a476362" volumeName="kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-catalog-content" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282247 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fbfc2caf-126e-41b9-9b31-05f7a45d8536" volumeName="kubernetes.io/secret/fbfc2caf-126e-41b9-9b31-05f7a45d8536-serving-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282264 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0caabde8-d49a-431d-afe5-8b283188c11c" volumeName="kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-metrics-certs" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282283 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0caabde8-d49a-431d-afe5-8b283188c11c" volumeName="kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-default-certificate" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282297 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95c7493b-ad9d-490e-83f3-aa28750b2b5e" volumeName="kubernetes.io/configmap/95c7493b-ad9d-490e-83f3-aa28750b2b5e-config-volume" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282313 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5757329-8692-4719-b3c7-b5df78110fcf" volumeName="kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-trusted-ca-bundle" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282356 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7" volumeName="kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282378 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6db75e5-efd1-4bfa-9941-0934d7621ba2" volumeName="kubernetes.io/projected/c6db75e5-efd1-4bfa-9941-0934d7621ba2-kube-api-access" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282391 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" volumeName="kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282404 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbcb4b80-425a-4dd5-93a8-bb462f641ef1" volumeName="kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282418 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ad2904e-ece9-4d72-8683-c3e691e07497" volumeName="kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282432 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80eb89dc-ccfc-4360-811a-82a3ef6f7b65" volumeName="kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282447 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the
actual state" pod="" podName="81835d51-a414-440f-889b-690561e98d6a" volumeName="kubernetes.io/empty-dir/81835d51-a414-440f-889b-690561e98d6a-cache" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282463 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81835d51-a414-440f-889b-690561e98d6a" volumeName="kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-kube-api-access-nd8dv" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282479 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de46c12a-aa3e-442e-bcc4-365d05f50103" volumeName="kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-cni-binary-copy" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282493 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fb5dee36-70a4-47a4-afc2-d3209a476362" volumeName="kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-utilities" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282524 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b" volumeName="kubernetes.io/secret/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-serving-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282538 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91fc568a-61ad-400e-a54e-21d62e51bb17" volumeName="kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-bound-sa-token" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282554 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b67a99-eada-44d7-93eb-cc3ced777fc6" volumeName="kubernetes.io/configmap/96b67a99-eada-44d7-93eb-cc3ced777fc6-config" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282569 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" volumeName="kubernetes.io/projected/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-kube-api-access-2dlx5" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282582 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5757329-8692-4719-b3c7-b5df78110fcf" volumeName="kubernetes.io/projected/b5757329-8692-4719-b3c7-b5df78110fcf-kube-api-access-ztdc9" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282600 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49a28ab7-1176-4213-b037-19fe18bbe57b" volumeName="kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-script-lib" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282615 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="581ff17d-f121-4ece-8e45-81f1f710d163" volumeName="kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282795 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-client" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282814 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-serving-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282833 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="536a2de1-e13c-47d1-b61d-88e0a5fd2851" volumeName="kubernetes.io/projected/536a2de1-e13c-47d1-b61d-88e0a5fd2851-kube-api-access-pt5g7" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282850 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6da2aac0-42a0-45c2-93ec-b148f5889e8b" volumeName="kubernetes.io/projected/6da2aac0-42a0-45c2-93ec-b148f5889e8b-kube-api-access-9rtds" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282863 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3bf9dde-ca5b-46b8-883c-51e88ddf52e1" volumeName="kubernetes.io/configmap/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-service-ca" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282880 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c377a67-e763-4925-afae-a7f8546a369b" volumeName="kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-env-overrides" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282893 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c55a215a-9a95-4f48-8668-9b76503c3044" volumeName="kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282909 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49a28ab7-1176-4213-b037-19fe18bbe57b" volumeName="kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-config" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282925 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56e20b21-ba17-46ae-a740-0e7bd45eae5f" volumeName="kubernetes.io/secret/56e20b21-ba17-46ae-a740-0e7bd45eae5f-control-plane-machine-set-operator-tls" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282939 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65dd1dc7-1b90-40f6-82c9-dee90a1fa852" volumeName="kubernetes.io/secret/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cloud-credential-operator-serving-cert" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282959 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80eb89dc-ccfc-4360-811a-82a3ef6f7b65" volumeName="kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282973 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7d874a21-43aa-4d81-b904-853fb3da5a94" volumeName="kubernetes.io/projected/7d874a21-43aa-4d81-b904-853fb3da5a94-kube-api-access-4b8jr" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.282990 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e938267-de1f-46f7-bf78-b0b3e810c4fa" volumeName="kubernetes.io/projected/7e938267-de1f-46f7-bf78-b0b3e810c4fa-kube-api-access-kvmpk" seLinuxMountContext=""
Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283003 19803 reconstruct.go:130] "Volume is marked as uncertain
and added into the actual state" pod="" podName="8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7" volumeName="kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283017 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2f93bd-e4ce-4ed2-b249-946338f753ed" volumeName="kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-catalog-content" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283031 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="161d2fa6-a541-427a-a3e9-3297102a26f5" volumeName="kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283044 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e799871-735a-44e8-8193-24c5bb388928" volumeName="kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-trusted-ca-bundle" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283064 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e799871-735a-44e8-8193-24c5bb388928" volumeName="kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-service-ca-bundle" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283078 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75a53c09-210a-4346-99b0-a632b9e0a3c9" volumeName="kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283095 19803 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="b74de987-7962-425e-9447-24b285eb888f" volumeName="kubernetes.io/projected/b74de987-7962-425e-9447-24b285eb888f-kube-api-access-845hm" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283108 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" volumeName="kubernetes.io/projected/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-kube-api-access-lz8ww" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283121 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="de46c12a-aa3e-442e-bcc4-365d05f50103" volumeName="kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-daemon-config" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283136 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="536a2de1-e13c-47d1-b61d-88e0a5fd2851" volumeName="kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-client" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283151 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81835d51-a414-440f-889b-690561e98d6a" volumeName="kubernetes.io/secret/81835d51-a414-440f-889b-690561e98d6a-catalogserver-certs" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283163 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c687237e-50e5-405d-8fef-0efbc3866630" volumeName="kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-env-overrides" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283176 19803 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="ca06fac5-6707-4521-88ce-1768fede42c2" volumeName="kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283214 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8ad2a6d5-6edf-4840-89f9-47847c8dac05" volumeName="kubernetes.io/projected/8ad2a6d5-6edf-4840-89f9-47847c8dac05-kube-api-access-rrvhw" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283231 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5757329-8692-4719-b3c7-b5df78110fcf" volumeName="kubernetes.io/secret/b5757329-8692-4719-b3c7-b5df78110fcf-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283241 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca06fac5-6707-4521-88ce-1768fede42c2" volumeName="kubernetes.io/projected/ca06fac5-6707-4521-88ce-1768fede42c2-kube-api-access-2pt2w" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283253 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d89b5d71-5522-433e-a0bb-f2767332e744" volumeName="kubernetes.io/configmap/d89b5d71-5522-433e-a0bb-f2767332e744-signing-cabundle" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283266 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2581e5b5-8cbb-4fa5-9888-98fb572a6232" volumeName="kubernetes.io/configmap/2581e5b5-8cbb-4fa5-9888-98fb572a6232-auth-proxy-config" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283279 19803 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="3418d0fb-d0ae-4634-a645-dc387a19147f" volumeName="kubernetes.io/projected/3418d0fb-d0ae-4634-a645-dc387a19147f-kube-api-access-tdpt2" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283292 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34889110-f282-4c2c-a2b0-620033559e1b" volumeName="kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283305 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-config" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283317 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f2f0667c-90d6-4a6b-b540-9bd0ab5973ea" volumeName="kubernetes.io/secret/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283328 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fde89b0b-7133-4b97-9e35-51c0382bd366" volumeName="kubernetes.io/configmap/fde89b0b-7133-4b97-9e35-51c0382bd366-config" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283340 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d163333f-fda5-4067-ad7c-6f646ae411c8" volumeName="kubernetes.io/projected/d163333f-fda5-4067-ad7c-6f646ae411c8-kube-api-access-v2jgj" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283353 19803 reconstruct.go:130] "Volume is marked 
as uncertain and added into the actual state" pod="" podName="f2f0667c-90d6-4a6b-b540-9bd0ab5973ea" volumeName="kubernetes.io/configmap/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-config" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283368 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0ff72b58-aca9-46f1-86ca-da8339734ac9" volumeName="kubernetes.io/secret/0ff72b58-aca9-46f1-86ca-da8339734ac9-tls-certificates" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283380 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2760a216-fd4b-46d9-a4ec-2d3285ec02bd" volumeName="kubernetes.io/secret/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-machine-api-operator-tls" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283394 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49a28ab7-1176-4213-b037-19fe18bbe57b" volumeName="kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-env-overrides" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283449 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9863f7ff-4c8d-42a3-a822-01697cf9c920" volumeName="kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-catalog-content" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283463 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2f93bd-e4ce-4ed2-b249-946338f753ed" volumeName="kubernetes.io/projected/9d2f93bd-e4ce-4ed2-b249-946338f753ed-kube-api-access-qq6v6" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283477 19803 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="21110b48-25fc-434a-b156-7f6bd6064bed" volumeName="kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-images" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283492 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75a53c09-210a-4346-99b0-a632b9e0a3c9" volumeName="kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-bound-sa-token" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283505 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" volumeName="kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-ca" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283536 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="80eb89dc-ccfc-4360-811a-82a3ef6f7b65" volumeName="kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283549 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c687237e-50e5-405d-8fef-0efbc3866630" volumeName="kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-ovnkube-identity-cm" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283560 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" volumeName="kubernetes.io/projected/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-kube-api-access-jvrdt" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283571 19803 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" volumeName="kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-ca-certs" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283585 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6fd82994-f4d4-49e9-8742-07e206322e76" volumeName="kubernetes.io/empty-dir/6fd82994-f4d4-49e9-8742-07e206322e76-available-featuregates" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283596 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c377a67-e763-4925-afae-a7f8546a369b" volumeName="kubernetes.io/secret/8c377a67-e763-4925-afae-a7f8546a369b-ovn-control-plane-metrics-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283607 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b5757329-8692-4719-b3c7-b5df78110fcf" volumeName="kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-config" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283688 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="95c7493b-ad9d-490e-83f3-aa28750b2b5e" volumeName="kubernetes.io/secret/95c7493b-ad9d-490e-83f3-aa28750b2b5e-metrics-tls" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283700 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d2f93bd-e4ce-4ed2-b249-946338f753ed" volumeName="kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-utilities" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283712 19803 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="b74de987-7962-425e-9447-24b285eb888f" volumeName="kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-tmp" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283749 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c687237e-50e5-405d-8fef-0efbc3866630" volumeName="kubernetes.io/secret/c687237e-50e5-405d-8fef-0efbc3866630-webhook-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283763 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21110b48-25fc-434a-b156-7f6bd6064bed" volumeName="kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cluster-baremetal-operator-tls" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283774 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="75a53c09-210a-4346-99b0-a632b9e0a3c9" volumeName="kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-kube-api-access-zpdjh" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283816 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7e938267-de1f-46f7-bf78-b0b3e810c4fa" volumeName="kubernetes.io/secret/7e938267-de1f-46f7-bf78-b0b3e810c4fa-machine-approver-tls" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283827 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d" volumeName="kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283838 19803 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="9863f7ff-4c8d-42a3-a822-01697cf9c920" volumeName="kubernetes.io/projected/9863f7ff-4c8d-42a3-a822-01697cf9c920-kube-api-access-44dmt" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283850 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dbcb4b80-425a-4dd5-93a8-bb462f641ef1" volumeName="kubernetes.io/projected/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-kube-api-access-sd26j" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283894 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="161d2fa6-a541-427a-a3e9-3297102a26f5" volumeName="kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283923 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="536a2de1-e13c-47d1-b61d-88e0a5fd2851" volumeName="kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-trusted-ca-bundle" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283935 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="536a2de1-e13c-47d1-b61d-88e0a5fd2851" volumeName="kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.283979 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59" volumeName="kubernetes.io/projected/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-kube-api-access-98t5h" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284005 19803 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="fbfc2caf-126e-41b9-9b31-05f7a45d8536" volumeName="kubernetes.io/configmap/fbfc2caf-126e-41b9-9b31-05f7a45d8536-config" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284017 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fde89b0b-7133-4b97-9e35-51c0382bd366" volumeName="kubernetes.io/secret/fde89b0b-7133-4b97-9e35-51c0382bd366-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284030 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b" volumeName="kubernetes.io/projected/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-kube-api-access-pqfj5" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284043 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2760a216-fd4b-46d9-a4ec-2d3285ec02bd" volumeName="kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-config" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284055 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e799871-735a-44e8-8193-24c5bb388928" volumeName="kubernetes.io/projected/6e799871-735a-44e8-8193-24c5bb388928-kube-api-access-jthxn" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284066 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ca06fac5-6707-4521-88ce-1768fede42c2" volumeName="kubernetes.io/empty-dir/ca06fac5-6707-4521-88ce-1768fede42c2-tmpfs" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284094 19803 reconstruct.go:130] "Volume is marked as uncertain and added 
into the actual state" pod="" podName="6fd82994-f4d4-49e9-8742-07e206322e76" volumeName="kubernetes.io/projected/6fd82994-f4d4-49e9-8742-07e206322e76-kube-api-access-k8l9r" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284106 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91fc568a-61ad-400e-a54e-21d62e51bb17" volumeName="kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284118 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b3bf9dde-ca5b-46b8-883c-51e88ddf52e1" volumeName="kubernetes.io/secret/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284133 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3418d0fb-d0ae-4634-a645-dc387a19147f" volumeName="kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284145 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49a28ab7-1176-4213-b037-19fe18bbe57b" volumeName="kubernetes.io/projected/49a28ab7-1176-4213-b037-19fe18bbe57b-kube-api-access-n58nf" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284164 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="56e20b21-ba17-46ae-a740-0e7bd45eae5f" volumeName="kubernetes.io/projected/56e20b21-ba17-46ae-a740-0e7bd45eae5f-kube-api-access-g89p7" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284206 19803 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="65ef9aae-25a5-46c6-adf3-634f8f7a29bc" volumeName="kubernetes.io/projected/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-kube-api-access-psvcz" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284218 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6e799871-735a-44e8-8193-24c5bb388928" volumeName="kubernetes.io/secret/6e799871-735a-44e8-8193-24c5bb388928-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284229 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6fd82994-f4d4-49e9-8742-07e206322e76" volumeName="kubernetes.io/secret/6fd82994-f4d4-49e9-8742-07e206322e76-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284240 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2581e5b5-8cbb-4fa5-9888-98fb572a6232" volumeName="kubernetes.io/projected/2581e5b5-8cbb-4fa5-9888-98fb572a6232-kube-api-access-gh7ks" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284252 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="536a2de1-e13c-47d1-b61d-88e0a5fd2851" volumeName="kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-serving-ca" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284263 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74efa52b-fd97-418a-9a44-914442633f74" volumeName="kubernetes.io/projected/74efa52b-fd97-418a-9a44-914442633f74-kube-api-access-8jkzq" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284274 19803 reconstruct.go:130] 
"Volume is marked as uncertain and added into the actual state" pod="" podName="8c377a67-e763-4925-afae-a7f8546a369b" volumeName="kubernetes.io/projected/8c377a67-e763-4925-afae-a7f8546a369b-kube-api-access-t6wzz" seLinuxMountContext="" Mar 13 01:17:32.286901 master-0 kubenswrapper[19803]: I0313 01:17:32.284287 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81835d51-a414-440f-889b-690561e98d6a" volumeName="kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-ca-certs" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284298 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b74de987-7962-425e-9447-24b285eb888f" volumeName="kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-etc-tuned" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284308 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c6db75e5-efd1-4bfa-9941-0934d7621ba2" volumeName="kubernetes.io/secret/c6db75e5-efd1-4bfa-9941-0934d7621ba2-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284349 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" volumeName="kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284377 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="21110b48-25fc-434a-b156-7f6bd6064bed" volumeName="kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-config" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284389 19803 reconstruct.go:130] "Volume is marked as uncertain 
and added into the actual state" pod="" podName="46015913-c499-49b1-a9f6-a61c6e96b13f" volumeName="kubernetes.io/configmap/46015913-c499-49b1-a9f6-a61c6e96b13f-telemetry-config" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284401 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="581ff17d-f121-4ece-8e45-81f1f710d163" volumeName="kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284413 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="74efa52b-fd97-418a-9a44-914442633f74" volumeName="kubernetes.io/secret/74efa52b-fd97-418a-9a44-914442633f74-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284443 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" volumeName="kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-binary-copy" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284454 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" volumeName="kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-sysctl-allowlist" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284467 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31f19d97-50f9-4486-a8f9-df61ef2b0528" volumeName="kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284479 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="6da2aac0-42a0-45c2-93ec-b148f5889e8b" volumeName="kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-catalog-content" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284490 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" volumeName="kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-config" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284542 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" volumeName="kubernetes.io/secret/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-samples-operator-tls" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284554 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" volumeName="kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284565 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" volumeName="kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284577 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc" volumeName="kubernetes.io/secret/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-metrics-tls" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284592 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="0caabde8-d49a-431d-afe5-8b283188c11c" volumeName="kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-stats-auth" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284604 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="46015913-c499-49b1-a9f6-a61c6e96b13f" volumeName="kubernetes.io/projected/46015913-c499-49b1-a9f6-a61c6e96b13f-kube-api-access-jc8xs" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284617 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="536a2de1-e13c-47d1-b61d-88e0a5fd2851" volumeName="kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-policies" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284629 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0" volumeName="kubernetes.io/projected/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-kube-api-access-98t7n" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284640 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91fc568a-61ad-400e-a54e-21d62e51bb17" volumeName="kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-kube-api-access-fhk76" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284653 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd264af8-4ced-40c4-b4f6-202bab42d0cb" volumeName="kubernetes.io/projected/bd264af8-4ced-40c4-b4f6-202bab42d0cb-kube-api-access-xcf2h" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284664 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" volumeName="kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-encryption-config" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284676 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" volumeName="kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284689 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0caabde8-d49a-431d-afe5-8b283188c11c" volumeName="kubernetes.io/configmap/0caabde8-d49a-431d-afe5-8b283188c11c-service-ca-bundle" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284702 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b" volumeName="kubernetes.io/configmap/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-config" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284715 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="581ff17d-f121-4ece-8e45-81f1f710d163" volumeName="kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284726 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7" volumeName="kubernetes.io/projected/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-kube-api-access-b4qsk" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284737 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="dbcb4b80-425a-4dd5-93a8-bb462f641ef1" volumeName="kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284748 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fde89b0b-7133-4b97-9e35-51c0382bd366" volumeName="kubernetes.io/projected/fde89b0b-7133-4b97-9e35-51c0382bd366-kube-api-access" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284762 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="65dd1dc7-1b90-40f6-82c9-dee90a1fa852" volumeName="kubernetes.io/configmap/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cco-trusted-ca" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284774 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="91fc568a-61ad-400e-a54e-21d62e51bb17" volumeName="kubernetes.io/configmap/91fc568a-61ad-400e-a54e-21d62e51bb17-trusted-ca" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284786 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9863f7ff-4c8d-42a3-a822-01697cf9c920" volumeName="kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-utilities" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284798 19803 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" volumeName="kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca" seLinuxMountContext="" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.284808 19803 reconstruct.go:97] "Volume reconstruction finished" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: 
I0313 01:17:32.284817 19803 reconciler.go:26] "Reconciler: start to sync state" Mar 13 01:17:32.301958 master-0 kubenswrapper[19803]: I0313 01:17:32.289049 19803 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Mar 13 01:17:32.311086 master-0 kubenswrapper[19803]: I0313 01:17:32.310939 19803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 13 01:17:32.313065 master-0 kubenswrapper[19803]: I0313 01:17:32.313026 19803 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Mar 13 01:17:32.313154 master-0 kubenswrapper[19803]: I0313 01:17:32.313087 19803 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 13 01:17:32.313154 master-0 kubenswrapper[19803]: I0313 01:17:32.313123 19803 kubelet.go:2335] "Starting kubelet main sync loop" Mar 13 01:17:32.313282 master-0 kubenswrapper[19803]: E0313 01:17:32.313196 19803 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Mar 13 01:17:32.317092 master-0 kubenswrapper[19803]: I0313 01:17:32.316795 19803 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Mar 13 01:17:32.325777 master-0 kubenswrapper[19803]: I0313 01:17:32.325485 19803 generic.go:334] "Generic (PLEG): container finished" podID="536a2de1-e13c-47d1-b61d-88e0a5fd2851" containerID="9403cb28b6d645239098a1a9ce49ec1906fc26f7e015e1b08e21da092fbdcce4" exitCode=0 Mar 13 01:17:32.338003 master-0 kubenswrapper[19803]: I0313 01:17:32.337951 19803 generic.go:334] "Generic (PLEG): container finished" podID="96b67a99-eada-44d7-93eb-cc3ced777fc6" containerID="cc1038b189ab36843989b837c930bbf20934f08cf043e09fd788646b7d078f2a" exitCode=0 Mar 13 01:17:32.341771 master-0 kubenswrapper[19803]: I0313 01:17:32.341724 19803 generic.go:334] "Generic (PLEG): container finished" podID="9863f7ff-4c8d-42a3-a822-01697cf9c920" 
containerID="3f043b4a215a970a593ef894cb43fbc8629b221e80d790f74a2607306302a1c4" exitCode=0 Mar 13 01:17:32.341771 master-0 kubenswrapper[19803]: I0313 01:17:32.341767 19803 generic.go:334] "Generic (PLEG): container finished" podID="9863f7ff-4c8d-42a3-a822-01697cf9c920" containerID="e6fb5566e61aacae6cae75fa3a8129afd169d9d82e676f7571f17acc0384df03" exitCode=0 Mar 13 01:17:32.357595 master-0 kubenswrapper[19803]: I0313 01:17:32.356377 19803 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89" exitCode=0 Mar 13 01:17:32.359921 master-0 kubenswrapper[19803]: I0313 01:17:32.359888 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-controller-manager-operator_openshift-controller-manager-operator-8565d84698-7rhdg_74efa52b-fd97-418a-9a44-914442633f74/openshift-controller-manager-operator/2.log" Mar 13 01:17:32.359977 master-0 kubenswrapper[19803]: I0313 01:17:32.359940 19803 generic.go:334] "Generic (PLEG): container finished" podID="74efa52b-fd97-418a-9a44-914442633f74" containerID="9c0bd715b837c01a89df34dba5a1abd4f477608efb9ac5a6df89d6b122c0876b" exitCode=255 Mar 13 01:17:32.362649 master-0 kubenswrapper[19803]: I0313 01:17:32.362506 19803 generic.go:334] "Generic (PLEG): container finished" podID="19460daa-7d22-4d32-899c-274b86c56a13" containerID="ffc5eb0505bcd1aede3306af3760c2bce7320e07eb88bcd177785bc53255cfa2" exitCode=0 Mar 13 01:17:32.368356 master-0 kubenswrapper[19803]: I0313 01:17:32.368311 19803 generic.go:334] "Generic (PLEG): container finished" podID="fbfc2caf-126e-41b9-9b31-05f7a45d8536" containerID="5436fbc43037209189594bd015e39350294b9b8da6b6096cb145d36bfb03543f" exitCode=0 Mar 13 01:17:32.377681 master-0 kubenswrapper[19803]: I0313 01:17:32.377635 19803 generic.go:334] "Generic (PLEG): container finished" podID="dfb4407e-71fc-4684-aded-cc84f7e306dc" 
containerID="0f4de141c58d0310f424a3def148eab28bc960622ee39d63fb837590fa97a3c8" exitCode=0 Mar 13 01:17:32.382057 master-0 kubenswrapper[19803]: I0313 01:17:32.382002 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-n4252_07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/manager/0.log" Mar 13 01:17:32.382140 master-0 kubenswrapper[19803]: I0313 01:17:32.382092 19803 generic.go:334] "Generic (PLEG): container finished" podID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerID="fd379745af9da3dead649206438373348f4ca6dba57dff1deac4d0df35fc6fc1" exitCode=1 Mar 13 01:17:32.388838 master-0 kubenswrapper[19803]: I0313 01:17:32.388783 19803 generic.go:334] "Generic (PLEG): container finished" podID="c6db75e5-efd1-4bfa-9941-0934d7621ba2" containerID="c248d157af93f66dc74e732d276f334cdb9f66f93ff85dda8f8ef75466a1cda2" exitCode=0 Mar 13 01:17:32.397075 master-0 kubenswrapper[19803]: I0313 01:17:32.395460 19803 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="10e54ccf1c79035f275fa3427f827eeb618189c70d330140baae622cfa30b962" exitCode=0 Mar 13 01:17:32.402608 master-0 kubenswrapper[19803]: I0313 01:17:32.401001 19803 generic.go:334] "Generic (PLEG): container finished" podID="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" containerID="3b5d590cab289e687af0089813cf69faee5c388307bbafba8b29486da0d45d2a" exitCode=0 Mar 13 01:17:32.402608 master-0 kubenswrapper[19803]: I0313 01:17:32.401047 19803 generic.go:334] "Generic (PLEG): container finished" podID="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" containerID="d71905c580f15e2bd3a3f12e29fbae0f3bf41f215518cae86b4ede0ed005dd7f" exitCode=0 Mar 13 01:17:32.408302 master-0 kubenswrapper[19803]: I0313 01:17:32.408278 19803 generic.go:334] "Generic (PLEG): container finished" podID="f2f0667c-90d6-4a6b-b540-9bd0ab5973ea" containerID="db75a500d25df1d35034bc9e7d835e3af06e992e3af2605476ce0e45095ba6b9" exitCode=0 Mar 13 
01:17:32.411364 master-0 kubenswrapper[19803]: I0313 01:17:32.411299 19803 generic.go:334] "Generic (PLEG): container finished" podID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerID="94468d369b5f43adf08abc9d6a6230238254bef0eb81d4e6a3d5e925f29bcc13" exitCode=0 Mar 13 01:17:32.413335 master-0 kubenswrapper[19803]: E0313 01:17:32.413299 19803 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 01:17:32.427090 master-0 kubenswrapper[19803]: I0313 01:17:32.427022 19803 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="0234ab75b7bd5b13b1837cf8436f89b14014ac9adcda65e897e6eb1551c1103a" exitCode=0 Mar 13 01:17:32.427090 master-0 kubenswrapper[19803]: I0313 01:17:32.427064 19803 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="2624aa9d22934134d13192016a21d94a8ed206c5e3cce209796939167e9e62b2" exitCode=0 Mar 13 01:17:32.427090 master-0 kubenswrapper[19803]: I0313 01:17:32.427072 19803 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="79b311e1fab325ef8d97bf345a46f71efc38634e77d8ae4e5e2904a28462f5b3" exitCode=0 Mar 13 01:17:32.427090 master-0 kubenswrapper[19803]: I0313 01:17:32.427080 19803 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="2b884799b97327428feac7cdc419e91ce2a3eaeb0bebe09185e54d595c2b45d1" exitCode=0 Mar 13 01:17:32.427090 master-0 kubenswrapper[19803]: I0313 01:17:32.427087 19803 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" containerID="1c472f002bfa4991c063677c722842d806f2f0b4d30948f00ee774d9c40c71d2" exitCode=0 Mar 13 01:17:32.427090 master-0 kubenswrapper[19803]: I0313 01:17:32.427095 19803 generic.go:334] "Generic (PLEG): container finished" podID="f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd" 
containerID="10183ca532088fab9b3fb6cb86be21e2b5c24c18173f81ce8ac9d9efb43524c5" exitCode=0 Mar 13 01:17:32.430290 master-0 kubenswrapper[19803]: I0313 01:17:32.430251 19803 generic.go:334] "Generic (PLEG): container finished" podID="348e0611-5b3c-4238-a571-813fc16057df" containerID="53dcbd61cdb4ba2de960bb2099fda9de5cc31628732654b744e0b56ff9b97460" exitCode=0 Mar 13 01:17:32.435661 master-0 kubenswrapper[19803]: I0313 01:17:32.435612 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-plhx7_b5757329-8692-4719-b3c7-b5df78110fcf/authentication-operator/2.log" Mar 13 01:17:32.435747 master-0 kubenswrapper[19803]: I0313 01:17:32.435670 19803 generic.go:334] "Generic (PLEG): container finished" podID="b5757329-8692-4719-b3c7-b5df78110fcf" containerID="25381ad36be0f85f98a8e3ecc8a5f4186dffd21de460ff1a56fc27b43bbb1f04" exitCode=255 Mar 13 01:17:32.440831 master-0 kubenswrapper[19803]: I0313 01:17:32.440783 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_kube-rbac-proxy-crio-master-0_e9add8df47182fc2eaf8cd78016ebe72/kube-rbac-proxy-crio/2.log" Mar 13 01:17:32.441448 master-0 kubenswrapper[19803]: I0313 01:17:32.441381 19803 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="6d8995670c2a83fdd48a121ac1de3a71b9ce55c04e64601cc3a96c583c68bc2c" exitCode=1 Mar 13 01:17:32.441448 master-0 kubenswrapper[19803]: I0313 01:17:32.441439 19803 generic.go:334] "Generic (PLEG): container finished" podID="e9add8df47182fc2eaf8cd78016ebe72" containerID="a0021a247a97b068e059ad5f822a94ffb91a3ed3409e6c3e37ac414a6210ce2d" exitCode=0 Mar 13 01:17:32.446493 master-0 kubenswrapper[19803]: I0313 01:17:32.446451 19803 generic.go:334] "Generic (PLEG): container finished" podID="23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b" containerID="b30ae4d37e850868384d04498318b52f585a63274ae43d082fa8cb4389cea8b3" exitCode=0 Mar 13 
01:17:32.453093 master-0 kubenswrapper[19803]: I0313 01:17:32.453045 19803 generic.go:334] "Generic (PLEG): container finished" podID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" containerID="7b8fcf0165d80adda60451116dbf0d6712f4aa8b3cf335302becbea472ed8b9a" exitCode=0 Mar 13 01:17:32.454833 master-0 kubenswrapper[19803]: I0313 01:17:32.454802 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-z4qvz_81835d51-a414-440f-889b-690561e98d6a/manager/0.log" Mar 13 01:17:32.455112 master-0 kubenswrapper[19803]: I0313 01:17:32.455084 19803 generic.go:334] "Generic (PLEG): container finished" podID="81835d51-a414-440f-889b-690561e98d6a" containerID="e9eb86bc8639ac87892dc75bde4aa22bd6e683c301d4d69ac50acf0d02a2db39" exitCode=1 Mar 13 01:17:32.460847 master-0 kubenswrapper[19803]: I0313 01:17:32.460796 19803 generic.go:334] "Generic (PLEG): container finished" podID="6da2aac0-42a0-45c2-93ec-b148f5889e8b" containerID="e494bdc5d34f6d35be15c841021162373cc2a0a39223427d66e514de073d9457" exitCode=0 Mar 13 01:17:32.460847 master-0 kubenswrapper[19803]: I0313 01:17:32.460838 19803 generic.go:334] "Generic (PLEG): container finished" podID="6da2aac0-42a0-45c2-93ec-b148f5889e8b" containerID="1e251dae2aaa8815d73b243c1cd351484535753e760cb3f4fe039313f2622d66" exitCode=0 Mar 13 01:17:32.471999 master-0 kubenswrapper[19803]: I0313 01:17:32.471941 19803 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="8df1059c68299a3330235cc4d111397a59bfb0c4b40d95af664427109c129231" exitCode=0 Mar 13 01:17:32.471999 master-0 kubenswrapper[19803]: I0313 01:17:32.471983 19803 generic.go:334] "Generic (PLEG): container finished" podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="6c9bd5245949231d7973259139b8774c20bbb32018502eb3bd133d4e8aa89584" exitCode=0 Mar 13 01:17:32.471999 master-0 kubenswrapper[19803]: I0313 01:17:32.471992 19803 generic.go:334] "Generic (PLEG): container finished" 
podID="5f77c8e18b751d90bc0dfe2d4e304050" containerID="b6ea782ca75304abc2ccc9ab19e6d9b4a2889fe649ebf475c9c95d91d8dba102" exitCode=0 Mar 13 01:17:32.480080 master-0 kubenswrapper[19803]: I0313 01:17:32.480050 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-mcps9_c687237e-50e5-405d-8fef-0efbc3866630/approver/0.log" Mar 13 01:17:32.480493 master-0 kubenswrapper[19803]: I0313 01:17:32.480443 19803 generic.go:334] "Generic (PLEG): container finished" podID="c687237e-50e5-405d-8fef-0efbc3866630" containerID="826ddf0fad5a47b74a9e97796304f54274bf436e1dab02b9917102d0ced785b8" exitCode=1 Mar 13 01:17:32.484659 master-0 kubenswrapper[19803]: I0313 01:17:32.484619 19803 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="dc0cc2d6bf9be0a194a0217c205d2ab79cbfb7d5acd7c9e8902600ce17ed4649" exitCode=0 Mar 13 01:17:32.484659 master-0 kubenswrapper[19803]: I0313 01:17:32.484651 19803 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="03b6f556b130d09fe1680dbfd846eba4b3a8ef627f216c08cf30ba1c6140ea1c" exitCode=0 Mar 13 01:17:32.484659 master-0 kubenswrapper[19803]: I0313 01:17:32.484660 19803 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="4c5b2d8c08ccdfef9dcab32e4f7ca60deac949b04ad9ebcfbb4f605f23b2baeb" exitCode=0 Mar 13 01:17:32.487813 master-0 kubenswrapper[19803]: I0313 01:17:32.487777 19803 generic.go:334] "Generic (PLEG): container finished" podID="be2913a0-453b-4b24-ab2c-b8ef2ad3ac16" containerID="bce7dc8174f12b3e41c7f7d3531e034e590edcaa83e3928c6f42ad9ec7e9122d" exitCode=0 Mar 13 01:17:32.493123 master-0 kubenswrapper[19803]: I0313 01:17:32.493092 19803 generic.go:334] "Generic (PLEG): container finished" podID="9d2f93bd-e4ce-4ed2-b249-946338f753ed" containerID="85929f4bdc709951d2ed40828c44291860167df639f2be4b11644838c712256b" exitCode=0 Mar 13 
01:17:32.493123 master-0 kubenswrapper[19803]: I0313 01:17:32.493120 19803 generic.go:334] "Generic (PLEG): container finished" podID="9d2f93bd-e4ce-4ed2-b249-946338f753ed" containerID="0e8798fe2e8ef33cc2b91fe39e59f52189be2b65c2d2ed1095f875a54002ee95" exitCode=0 Mar 13 01:17:32.495371 master-0 kubenswrapper[19803]: I0313 01:17:32.495233 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-operator_network-operator-7c649bf6d4-4zrk7_dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc/network-operator/0.log" Mar 13 01:17:32.495371 master-0 kubenswrapper[19803]: I0313 01:17:32.495276 19803 generic.go:334] "Generic (PLEG): container finished" podID="dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc" containerID="7f4c53a355951175886abfb80eb4256c32b51f0ad7d9c970345c8e4c70d93ccb" exitCode=255 Mar 13 01:17:32.498055 master-0 kubenswrapper[19803]: I0313 01:17:32.498018 19803 generic.go:334] "Generic (PLEG): container finished" podID="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" containerID="dcf6d152312d68c0bbfc80742b97a5a67fe4e1a416cc5f56000de592b4daaaa8" exitCode=0 Mar 13 01:17:32.518409 master-0 kubenswrapper[19803]: I0313 01:17:32.518263 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/0.log" Mar 13 01:17:32.519648 master-0 kubenswrapper[19803]: I0313 01:17:32.518723 19803 generic.go:334] "Generic (PLEG): container finished" podID="6fd82994-f4d4-49e9-8742-07e206322e76" containerID="544c375d0985569800e6f6387597c6bbdd7b9967f0bc5e80927a60f7a9628d80" exitCode=255 Mar 13 01:17:32.519648 master-0 kubenswrapper[19803]: I0313 01:17:32.518746 19803 generic.go:334] "Generic (PLEG): container finished" podID="6fd82994-f4d4-49e9-8742-07e206322e76" containerID="b07ddec5ef3c1ac03f780236e9b354e58153c6ffb31f2047f7405a97d9d4d4c1" exitCode=0 Mar 13 01:17:32.529691 master-0 kubenswrapper[19803]: I0313 01:17:32.529610 19803 generic.go:334] "Generic 
(PLEG): container finished" podID="fde89b0b-7133-4b97-9e35-51c0382bd366" containerID="aa8d570cc916b085b102875f5c8076691d32fc0570491e0ffdf16bc87e8e94b9" exitCode=0 Mar 13 01:17:32.532305 master-0 kubenswrapper[19803]: I0313 01:17:32.532273 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/0.log" Mar 13 01:17:32.532437 master-0 kubenswrapper[19803]: I0313 01:17:32.532323 19803 generic.go:334] "Generic (PLEG): container finished" podID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" containerID="743c555e1cf0c98c73695ed678affcb2226d9582a12dd77e2de535512f78c66d" exitCode=1 Mar 13 01:17:32.535179 master-0 kubenswrapper[19803]: I0313 01:17:32.535137 19803 generic.go:334] "Generic (PLEG): container finished" podID="fb5dee36-70a4-47a4-afc2-d3209a476362" containerID="61bf0fbf4501061e78c007eaf05936de96edb76fe74c0218e6d72868ece9ed9a" exitCode=0 Mar 13 01:17:32.535179 master-0 kubenswrapper[19803]: I0313 01:17:32.535174 19803 generic.go:334] "Generic (PLEG): container finished" podID="fb5dee36-70a4-47a4-afc2-d3209a476362" containerID="8b167e4b932b64d1bd8542773273ff5f0d06008ccdbf22a27a549d7fe3c912eb" exitCode=0 Mar 13 01:17:32.565342 master-0 kubenswrapper[19803]: I0313 01:17:32.565296 19803 generic.go:334] "Generic (PLEG): container finished" podID="7106c6fe-7c8d-45b9-bc5c-521db743663f" containerID="9dea5041e065ce99780170074cdc1fcbcd589815d7a4ea10ac0c5a7ebf2078b0" exitCode=0 Mar 13 01:17:32.571683 master-0 kubenswrapper[19803]: I0313 01:17:32.571657 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-p5c8r_75a53c09-210a-4346-99b0-a632b9e0a3c9/ingress-operator/0.log" Mar 13 01:17:32.571882 master-0 kubenswrapper[19803]: I0313 01:17:32.571858 19803 generic.go:334] "Generic (PLEG): container finished" podID="75a53c09-210a-4346-99b0-a632b9e0a3c9" 
containerID="951aa4d6803ad0268be9d58f3b51ebac5555d4f85866ee29a2837692062094ee" exitCode=1 Mar 13 01:17:32.573448 master-0 kubenswrapper[19803]: I0313 01:17:32.573406 19803 generic.go:334] "Generic (PLEG): container finished" podID="fdcd8438-d33f-490f-a841-8944c58506f8" containerID="263627f8d8439063ebce2b99f2d70b421aed9f9cb196a75460d6a6b14ebb0fe5" exitCode=0 Mar 13 01:17:32.577068 master-0 kubenswrapper[19803]: I0313 01:17:32.577039 19803 generic.go:334] "Generic (PLEG): container finished" podID="a1a56802af72ce1aac6b5077f1695ac0" containerID="9ffa27ab0dc3e98ab44b8a36575c0b8aebd551a30b7af7d3a867758695337923" exitCode=0 Mar 13 01:17:32.602649 master-0 kubenswrapper[19803]: I0313 01:17:32.591223 19803 generic.go:334] "Generic (PLEG): container finished" podID="8c377a67-e763-4925-afae-a7f8546a369b" containerID="7e4809732e6f42f6e1aaeab2220c5d3d3098fc28ea26ac8cc73446ea1b10cd93" exitCode=0 Mar 13 01:17:32.605476 master-0 kubenswrapper[19803]: I0313 01:17:32.605415 19803 generic.go:334] "Generic (PLEG): container finished" podID="49a28ab7-1176-4213-b037-19fe18bbe57b" containerID="84a75bf6c5b0aae138001278a5abd61d9c21955abcbf0e21925aa4e975040741" exitCode=0 Mar 13 01:17:32.613490 master-0 kubenswrapper[19803]: E0313 01:17:32.613385 19803 kubelet.go:2359] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Mar 13 01:17:32.785607 master-0 kubenswrapper[19803]: I0313 01:17:32.785307 19803 manager.go:324] Recovery completed Mar 13 01:17:32.864135 master-0 kubenswrapper[19803]: I0313 01:17:32.864012 19803 cpu_manager.go:225] "Starting CPU manager" policy="none" Mar 13 01:17:32.864135 master-0 kubenswrapper[19803]: I0313 01:17:32.864052 19803 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Mar 13 01:17:32.864135 master-0 kubenswrapper[19803]: I0313 01:17:32.864084 19803 state_mem.go:36] "Initialized new in-memory state store" Mar 13 01:17:32.864356 master-0 kubenswrapper[19803]: I0313 01:17:32.864338 19803 state_mem.go:88] 
"Updated default CPUSet" cpuSet="" Mar 13 01:17:32.864389 master-0 kubenswrapper[19803]: I0313 01:17:32.864359 19803 state_mem.go:96] "Updated CPUSet assignments" assignments={} Mar 13 01:17:32.864419 master-0 kubenswrapper[19803]: I0313 01:17:32.864392 19803 state_checkpoint.go:136] "State checkpoint: restored state from checkpoint" Mar 13 01:17:32.864419 master-0 kubenswrapper[19803]: I0313 01:17:32.864405 19803 state_checkpoint.go:137] "State checkpoint: defaultCPUSet" defaultCpuSet="" Mar 13 01:17:32.864419 master-0 kubenswrapper[19803]: I0313 01:17:32.864418 19803 policy_none.go:49] "None policy: Start" Mar 13 01:17:32.884673 master-0 kubenswrapper[19803]: I0313 01:17:32.876282 19803 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 13 01:17:32.884673 master-0 kubenswrapper[19803]: I0313 01:17:32.876334 19803 state_mem.go:35] "Initializing new in-memory state store" Mar 13 01:17:32.884673 master-0 kubenswrapper[19803]: I0313 01:17:32.876673 19803 state_mem.go:75] "Updated machine memory state" Mar 13 01:17:32.884673 master-0 kubenswrapper[19803]: I0313 01:17:32.876685 19803 state_checkpoint.go:82] "State checkpoint: restored state from checkpoint" Mar 13 01:17:32.895794 master-0 kubenswrapper[19803]: I0313 01:17:32.895737 19803 manager.go:334] "Starting Device Plugin manager" Mar 13 01:17:32.895943 master-0 kubenswrapper[19803]: I0313 01:17:32.895830 19803 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 13 01:17:32.895943 master-0 kubenswrapper[19803]: I0313 01:17:32.895851 19803 server.go:79] "Starting device plugin registration server" Mar 13 01:17:32.896448 master-0 kubenswrapper[19803]: I0313 01:17:32.896411 19803 eviction_manager.go:189] "Eviction manager: starting control loop" Mar 13 01:17:32.896550 master-0 kubenswrapper[19803]: I0313 01:17:32.896438 19803 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 
monitorPeriod="10s" Mar 13 01:17:32.897440 master-0 kubenswrapper[19803]: I0313 01:17:32.897185 19803 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Mar 13 01:17:32.897440 master-0 kubenswrapper[19803]: I0313 01:17:32.897266 19803 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Mar 13 01:17:32.897440 master-0 kubenswrapper[19803]: I0313 01:17:32.897274 19803 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 13 01:17:33.000133 master-0 kubenswrapper[19803]: I0313 01:17:33.000093 19803 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Mar 13 01:17:33.006661 master-0 kubenswrapper[19803]: I0313 01:17:33.006635 19803 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientMemory" Mar 13 01:17:33.006805 master-0 kubenswrapper[19803]: I0313 01:17:33.006793 19803 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasNoDiskPressure" Mar 13 01:17:33.006877 master-0 kubenswrapper[19803]: I0313 01:17:33.006866 19803 kubelet_node_status.go:724] "Recording event message for node" node="master-0" event="NodeHasSufficientPID" Mar 13 01:17:33.007049 master-0 kubenswrapper[19803]: I0313 01:17:33.007038 19803 kubelet_node_status.go:76] "Attempting to register node" node="master-0" Mar 13 01:17:33.013678 master-0 kubenswrapper[19803]: I0313 01:17:33.013585 19803 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0","openshift-kube-apiserver/kube-apiserver-master-0","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0"] Mar 13 01:17:33.014844 master-0 kubenswrapper[19803]: I0313 01:17:33.014747 19803 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"f417e14665db2ffffa887ce21c9ff0ed","Type":"ContainerStarted","Data":"1343b3441a72fc54f57c90f1ad8e6009baa9cad0afaf07655566864af4172871"} Mar 13 01:17:33.014910 master-0 kubenswrapper[19803]: I0313 01:17:33.014846 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"f417e14665db2ffffa887ce21c9ff0ed","Type":"ContainerStarted","Data":"cfc26f6d3347a68e4b723da2b42435408304ba3ab936c3e96d2706d8fe04b73e"} Mar 13 01:17:33.014910 master-0 kubenswrapper[19803]: I0313 01:17:33.014880 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="731c43764bf6ca60ccb49818767715764d1313ac8e97ad985509652329db44a1" Mar 13 01:17:33.014910 master-0 kubenswrapper[19803]: I0313 01:17:33.014894 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918"} Mar 13 01:17:33.014910 master-0 kubenswrapper[19803]: I0313 01:17:33.014904 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702"} Mar 13 01:17:33.015029 master-0 kubenswrapper[19803]: I0313 01:17:33.014917 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"1671c753884a85b9d5990bcf5a091faa5ed2c13052477fadfd66f9da210dc6ae"} Mar 13 01:17:33.015029 master-0 kubenswrapper[19803]: I0313 01:17:33.014928 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3"} Mar 13 01:17:33.015029 master-0 kubenswrapper[19803]: I0313 01:17:33.014938 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6"} Mar 13 01:17:33.015029 master-0 kubenswrapper[19803]: I0313 01:17:33.014947 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerDied","Data":"ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89"} Mar 13 01:17:33.015029 master-0 kubenswrapper[19803]: I0313 01:17:33.014977 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"d3c9a7ae76767c58b811cabb43c24171c3fc11aa2f0559500ff39ed6ef226896"} Mar 13 01:17:33.015029 master-0 kubenswrapper[19803]: I0313 01:17:33.015022 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d309d321e2b3c142df3b5753d507bff20af97e5f4ec76c20a22f4d71bfceba91" Mar 13 01:17:33.015287 master-0 kubenswrapper[19803]: I0313 01:17:33.015266 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7137ef449e547dd401ee27f3f443a2af47f35fca54a0b207f6e8c71de0c42b56" Mar 13 01:17:33.015888 master-0 kubenswrapper[19803]: I0313 01:17:33.015289 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c6514526947873408e0b49fddc6682f5c16ba101c6fab277e750a3d8d114b4c" Mar 13 01:17:33.015941 master-0 kubenswrapper[19803]: I0313 01:17:33.015921 19803 
pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16702ae6bf55253a1d4eab890d7c44c135c95ffb1a9130b6d582c2d745d25c4a" Mar 13 01:17:33.015941 master-0 kubenswrapper[19803]: I0313 01:17:33.015932 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"44c7d80aa4aadd7ed9cfa67d8c3f0e0defda54140db09140424d6dcf8461fe9e"} Mar 13 01:17:33.015995 master-0 kubenswrapper[19803]: I0313 01:17:33.015945 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"27da9b144d4a4f750c33de749ba64c7d7c2d328ab7a8dc23bb642f52fbaf1fd7"} Mar 13 01:17:33.016035 master-0 kubenswrapper[19803]: I0313 01:17:33.015955 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"36b85103aab608e07fe57ad44e030eaf64a6694fa43ef8b29c17a2a587b80411"} Mar 13 01:17:33.016067 master-0 kubenswrapper[19803]: I0313 01:17:33.016034 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"10e54ccf1c79035f275fa3427f827eeb618189c70d330140baae622cfa30b962"} Mar 13 01:17:33.016392 master-0 kubenswrapper[19803]: I0313 01:17:33.016047 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"f1893a5398893367fa6dfc57f35d1608dbd0ecd13591ae45338583f2663f6d59"} Mar 13 01:17:33.018173 master-0 kubenswrapper[19803]: I0313 01:17:33.018133 19803 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="4e52f6642c159916a88506443432057d57f997d443e11ff2cb2903a38a0ee186" Mar 13 01:17:33.018173 master-0 kubenswrapper[19803]: I0313 01:17:33.018174 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdb40c51b631cfd8ed2d352b14bf92f1b865b72b8d5f97d0a609a8d216e8763a" Mar 13 01:17:33.018241 master-0 kubenswrapper[19803]: I0313 01:17:33.018190 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"42bca1f920cccc1592fa3eb549dd4fbc400b4f25b9bcf7ef0e6efb375c7c1e44"} Mar 13 01:17:33.018241 master-0 kubenswrapper[19803]: I0313 01:17:33.018206 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"6d8995670c2a83fdd48a121ac1de3a71b9ce55c04e64601cc3a96c583c68bc2c"} Mar 13 01:17:33.018241 master-0 kubenswrapper[19803]: I0313 01:17:33.018216 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerDied","Data":"a0021a247a97b068e059ad5f822a94ffb91a3ed3409e6c3e37ac414a6210ce2d"} Mar 13 01:17:33.018241 master-0 kubenswrapper[19803]: I0313 01:17:33.018229 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" event={"ID":"e9add8df47182fc2eaf8cd78016ebe72","Type":"ContainerStarted","Data":"365754cbfac698d37a141ce5e1eed9f4df598d676f3fa84080a6e5e7497b9846"} Mar 13 01:17:33.018363 master-0 kubenswrapper[19803]: I0313 01:17:33.018252 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c0b9c0cf7cb9fa1122b0ea7980af02b767737d56971625a4ab2e9432fd86c393" Mar 13 01:17:33.018363 master-0 
kubenswrapper[19803]: I0313 01:17:33.018289 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96f6e8e91d7109dc966f1dd2cbd1b74212480a19ccee4443647cc163d94cfaba" Mar 13 01:17:33.018363 master-0 kubenswrapper[19803]: I0313 01:17:33.018296 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa"} Mar 13 01:17:33.018363 master-0 kubenswrapper[19803]: I0313 01:17:33.018306 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b"} Mar 13 01:17:33.018363 master-0 kubenswrapper[19803]: I0313 01:17:33.018318 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"b1c12809753fc2546fb8e821c8e7f6bbad80bd3bc2111cc6731d186681cf0988"} Mar 13 01:17:33.018363 master-0 kubenswrapper[19803]: I0313 01:17:33.018327 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82"} Mar 13 01:17:33.018363 master-0 kubenswrapper[19803]: I0313 01:17:33.018336 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"719fed2d09a4c83ba7a2065c6d705852286e4074c168ef17e96ec1f4c19087b7"} Mar 13 01:17:33.018363 master-0 
kubenswrapper[19803]: I0313 01:17:33.018356 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"d4307a8d99b06baad18f959ac230bad4c2bf7ab603532b53714a7efb8d542993"} Mar 13 01:17:33.018363 master-0 kubenswrapper[19803]: I0313 01:17:33.018368 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"11afe1e82df06ef58f2b34ee7f14cab6582b1c3ebb23e73f966071d3f60bb7d3"} Mar 13 01:17:33.018363 master-0 kubenswrapper[19803]: I0313 01:17:33.018378 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"bf41e0708018a7a42a9ea985f7ec3256a3866f84520062060092284abe939c72"} Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018388 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"3b4b0099ff3715076e4da8c307cf4cdf19113ad975d741008a026d470fd6e8de"} Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018398 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"59b81ddf96703b46c61723679f4eccced325378be4bf3ce47532a5cf8c25aff1"} Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018407 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"dc0cc2d6bf9be0a194a0217c205d2ab79cbfb7d5acd7c9e8902600ce17ed4649"} Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018418 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"03b6f556b130d09fe1680dbfd846eba4b3a8ef627f216c08cf30ba1c6140ea1c"} Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018427 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerDied","Data":"4c5b2d8c08ccdfef9dcab32e4f7ca60deac949b04ad9ebcfbb4f605f23b2baeb"} Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018436 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"8e52bef89f4b50e4590a1719bcc5d7e5","Type":"ContainerStarted","Data":"9eabc21ddc531984c62d09d80b4ff970db77726a77a7e29d7793ee390a8437b9"} Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018448 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="84da267170c9e91b410d7e9d9438b6c48844d88a0a7765f16ae9587a89797c0b" Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018486 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7d7e623e8c0d9e066e1623241bc5f63e9d1f8ed656f8cc7a2cd92ed153ee3235" Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018522 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b499ba30f4ea8be865dc7a8837d7f5fa14f7ab7345bba4ad96fb42befea24a27" Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018565 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61d228ad61217efd3f38e7f1eb742a8a47bf9f51d0ed1ddebcc51b7470bf905e" Mar 13 01:17:33.018677 master-0 kubenswrapper[19803]: I0313 01:17:33.018581 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5036dd248963b083dbf679edea9371d4e006e42fcff4a71dbda91fde659408c6" Mar 13 01:17:33.018677 master-0 
kubenswrapper[19803]: I0313 01:17:33.018591 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6b3f392e02f5ed94d399a015a546ebd73a07ae53ff9ae5634f2dda7569b0d7e" Mar 13 01:17:33.022921 master-0 kubenswrapper[19803]: I0313 01:17:33.022873 19803 kubelet_node_status.go:115] "Node was previously registered" node="master-0" Mar 13 01:17:33.022988 master-0 kubenswrapper[19803]: I0313 01:17:33.022966 19803 kubelet_node_status.go:79] "Successfully registered node" node="master-0" Mar 13 01:17:33.026609 master-0 kubenswrapper[19803]: E0313 01:17:33.026569 19803 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-master-0\" already exists" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:33.032398 master-0 kubenswrapper[19803]: E0313 01:17:33.032348 19803 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-master-0\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:33.032601 master-0 kubenswrapper[19803]: E0313 01:17:33.032574 19803 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-rbac-proxy-crio-master-0\" already exists" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:17:33.093801 master-0 kubenswrapper[19803]: I0313 01:17:33.093737 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.093801 master-0 kubenswrapper[19803]: I0313 01:17:33.093790 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: 
\"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.093801 master-0 kubenswrapper[19803]: I0313 01:17:33.093811 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.093829 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.093850 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.093868 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.093896 19803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.093909 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.093925 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.093944 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.093960 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.093979 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.093994 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.094009 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.094029 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.094048 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.094063 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.094079 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.094094 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.094498 master-0 kubenswrapper[19803]: I0313 01:17:33.094110 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.129275 master-0 kubenswrapper[19803]: E0313 01:17:33.129146 19803 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"etcd-master-0\" already exists" 
pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.130218 master-0 kubenswrapper[19803]: E0313 01:17:33.130186 19803 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"openshift-kube-scheduler-master-0\" already exists" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:33.194377 master-0 kubenswrapper[19803]: I0313 01:17:33.194298 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:33.194377 master-0 kubenswrapper[19803]: I0313 01:17:33.194359 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:17:33.194377 master-0 kubenswrapper[19803]: I0313 01:17:33.194377 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.194377 master-0 kubenswrapper[19803]: I0313 01:17:33.194395 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:33.194841 master-0 kubenswrapper[19803]: I0313 01:17:33.194413 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:33.194841 master-0 kubenswrapper[19803]: I0313 01:17:33.194432 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.194841 master-0 kubenswrapper[19803]: I0313 01:17:33.194577 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-etc-kube\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:17:33.194841 master-0 kubenswrapper[19803]: I0313 01:17:33.194668 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.194841 master-0 kubenswrapper[19803]: I0313 01:17:33.194729 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.194841 master-0 kubenswrapper[19803]: I0313 
01:17:33.194736 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:33.194841 master-0 kubenswrapper[19803]: I0313 01:17:33.194799 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.194841 master-0 kubenswrapper[19803]: I0313 01:17:33.194819 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:33.194841 master-0 kubenswrapper[19803]: I0313 01:17:33.194844 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.194841 master-0 kubenswrapper[19803]: I0313 01:17:33.194790 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.194874 19803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.194877 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.194908 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.194909 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.194927 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.194941 19803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.194909 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.194931 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.194996 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e9add8df47182fc2eaf8cd78016ebe72-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-master-0\" (UID: \"e9add8df47182fc2eaf8cd78016ebe72\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195027 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: 
I0313 01:17:33.195074 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195102 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195135 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195098 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195180 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195198 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195203 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195242 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195243 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195260 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195284 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195306 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195336 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195372 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"etcd-master-0\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:17:33.195408 master-0 kubenswrapper[19803]: I0313 01:17:33.195415 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:17:33.197157 master-0 kubenswrapper[19803]: I0313 01:17:33.197066 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-resource-dir\") pod 
\"kube-controller-manager-master-0\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:33.204651 master-0 kubenswrapper[19803]: I0313 01:17:33.204550 19803 apiserver.go:52] "Watching apiserver" Mar 13 01:17:33.227092 master-0 kubenswrapper[19803]: I0313 01:17:33.226993 19803 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Mar 13 01:17:33.236492 master-0 kubenswrapper[19803]: I0313 01:17:33.236354 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9","openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c","openshift-kube-scheduler/installer-4-master-0","openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn","openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg","openshift-etcd/installer-1-master-0","openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt","openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb","openshift-network-operator/iptables-alerter-mkkgg","kube-system/bootstrap-kube-controller-manager-master-0","openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s","openshift-network-operator/network-operator-7c649bf6d4-4zrk7","openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t","openshift-ingress-operator/ingress-operator-677db989d6-p5c8r","openshift-kube-scheduler/openshift-kube-scheduler-master-0","openshift-marketplace/redhat-marketplace-cx58l","openshift-multus/multus-admission-controller-8d675b596-ddtwn","openshift-multus/multus-xk75p","openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn","openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz","openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2","openshift-kube-apiserver/installer-1-master-0","openshift-network-node-identity/network-node-identity-mcps9",
"openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g","openshift-controller-manager/controller-manager-7f46d696f9-s9d6s","openshift-machine-config-operator/machine-config-daemon-fprhw","kube-system/bootstrap-kube-scheduler-master-0","openshift-dns/dns-default-pfsjd","openshift-dns-operator/dns-operator-589895fbb7-wb6qq","openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5","openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp","openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk","openshift-marketplace/community-operators-zglhp","openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h","openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8","openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm","openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v","openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7","openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6","openshift-machine-config-operator/kube-rbac-proxy-crio-master-0","openshift-network-diagnostics/network-check-target-49pfj","openshift-apiserver/apiserver-7dbfb86fbb-mc7xz","openshift-kube-controller-manager/installer-2-master-0","openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9","openshift-marketplace/marketplace-operator-64bf9778cb-bx29h","openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252","openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg","openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4","openshift-dns/node-resolver-xmwg6","openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8","openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8","openshift-network-diagnostics/network-check-source-7c67b67d47-xd626","opensh
ift-ovn-kubernetes/ovnkube-node-nlhbx","openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld","openshift-ingress/router-default-79f8cd6fdd-kzq6q","openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf","openshift-kube-apiserver/bootstrap-kube-apiserver-master-0","openshift-multus/network-metrics-daemon-9hwz9","openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0","openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h","openshift-cluster-node-tuning-operator/tuned-p9mnd","openshift-config-operator/openshift-config-operator-64488f9d78-trr9r","openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl","openshift-kube-apiserver/kube-apiserver-master-0","openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq","assisted-installer/assisted-installer-controller-qztx6","openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7","openshift-etcd/etcd-master-0","openshift-kube-controller-manager/kube-controller-manager-master-0","openshift-marketplace/certified-operators-64xrl","openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l","openshift-multus/multus-additional-cni-plugins-mjh5s","openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg","openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8","openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st","openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp","openshift-service-ca/service-ca-84bfdbbb7f-n9vpf","openshift-insights/insights-operator-8f89dfddd-hn4jh","openshift-marketplace/redhat-operators-d9nkp"] Mar 13 01:17:33.239413 master-0 kubenswrapper[19803]: I0313 01:17:33.239361 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="assisted-installer/assisted-installer-controller-qztx6" Mar 13 01:17:33.241576 master-0 kubenswrapper[19803]: I0313 01:17:33.241496 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 01:17:33.242865 master-0 kubenswrapper[19803]: I0313 01:17:33.242831 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Mar 13 01:17:33.243408 master-0 kubenswrapper[19803]: I0313 01:17:33.243381 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 01:17:33.244812 master-0 kubenswrapper[19803]: I0313 01:17:33.244783 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 01:17:33.246886 master-0 kubenswrapper[19803]: I0313 01:17:33.244820 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 01:17:33.247036 master-0 kubenswrapper[19803]: I0313 01:17:33.246919 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 01:17:33.247219 master-0 kubenswrapper[19803]: I0313 01:17:33.247158 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Mar 13 01:17:33.247590 master-0 kubenswrapper[19803]: I0313 01:17:33.247496 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Mar 13 01:17:33.249140 master-0 kubenswrapper[19803]: I0313 01:17:33.249087 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 01:17:33.249248 master-0 kubenswrapper[19803]: 
I0313 01:17:33.249160 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Mar 13 01:17:33.249309 master-0 kubenswrapper[19803]: I0313 01:17:33.249245 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Mar 13 01:17:33.249388 master-0 kubenswrapper[19803]: I0313 01:17:33.249300 19803 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="e5815d77-bfd4-459e-9678-c08ac790805d" Mar 13 01:17:33.249500 master-0 kubenswrapper[19803]: I0313 01:17:33.249457 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Mar 13 01:17:33.249608 master-0 kubenswrapper[19803]: I0313 01:17:33.249579 19803 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="397f933d-d01c-48f5-905c-aaf9a01c8b0a" Mar 13 01:17:33.249673 master-0 kubenswrapper[19803]: I0313 01:17:33.249648 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Mar 13 01:17:33.249787 master-0 kubenswrapper[19803]: I0313 01:17:33.249742 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 13 01:17:33.249937 master-0 kubenswrapper[19803]: I0313 01:17:33.249892 19803 kubelet.go:2566] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="32b55bbd-f227-4444-94a9-28a06b9b2f01" Mar 13 01:17:33.249937 master-0 kubenswrapper[19803]: I0313 01:17:33.249923 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 
01:17:33.249937 master-0 kubenswrapper[19803]: I0313 01:17:33.249930 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Mar 13 01:17:33.250118 master-0 kubenswrapper[19803]: I0313 01:17:33.249944 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Mar 13 01:17:33.250118 master-0 kubenswrapper[19803]: I0313 01:17:33.249967 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Mar 13 01:17:33.250118 master-0 kubenswrapper[19803]: I0313 01:17:33.250116 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 01:17:33.250301 master-0 kubenswrapper[19803]: I0313 01:17:33.250280 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Mar 13 01:17:33.252190 master-0 kubenswrapper[19803]: I0313 01:17:33.252131 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 01:17:33.253762 master-0 kubenswrapper[19803]: I0313 01:17:33.253719 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 01:17:33.265787 master-0 kubenswrapper[19803]: I0313 01:17:33.264744 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-1-master-0" Mar 13 01:17:33.268721 master-0 kubenswrapper[19803]: I0313 01:17:33.266489 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 01:17:33.268721 master-0 kubenswrapper[19803]: I0313 01:17:33.268341 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Mar 13 01:17:33.268721 master-0 kubenswrapper[19803]: I0313 01:17:33.268489 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 01:17:33.268721 master-0 kubenswrapper[19803]: I0313 01:17:33.268649 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Mar 13 01:17:33.269169 master-0 kubenswrapper[19803]: I0313 01:17:33.268758 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 01:17:33.269169 master-0 kubenswrapper[19803]: I0313 01:17:33.268863 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Mar 13 01:17:33.272270 master-0 kubenswrapper[19803]: I0313 01:17:33.272158 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Mar 13 01:17:33.273071 master-0 kubenswrapper[19803]: I0313 01:17:33.272956 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 13 01:17:33.273284 master-0 kubenswrapper[19803]: I0313 01:17:33.273237 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Mar 13 01:17:33.282916 master-0 kubenswrapper[19803]: I0313 01:17:33.282650 19803 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Mar 13 01:17:33.284134 master-0 kubenswrapper[19803]: I0313 01:17:33.284076 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 01:17:33.285979 master-0 kubenswrapper[19803]: I0313 01:17:33.285939 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Mar 13 01:17:33.286247 master-0 kubenswrapper[19803]: I0313 01:17:33.286215 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Mar 13 01:17:33.286617 master-0 kubenswrapper[19803]: I0313 01:17:33.286578 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.286855 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.287091 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.287886 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.288325 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.288444 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 
01:17:33.288832 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.289249 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.289396 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.289608 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.289688 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.291184 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.289788 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.290062 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.290278 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 13 01:17:33.291920 master-0 kubenswrapper[19803]: I0313 01:17:33.290634 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Mar 13 01:17:33.296322 master-0 
kubenswrapper[19803]: I0313 01:17:33.295283 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295635 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49a28ab7-1176-4213-b037-19fe18bbe57b-ovn-node-metrics-cert\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295667 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8l9r\" (UniqueName: \"kubernetes.io/projected/6fd82994-f4d4-49e9-8742-07e206322e76-kube-api-access-k8l9r\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295689 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5757329-8692-4719-b3c7-b5df78110fcf-serving-cert\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295710 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ztdc9\" (UniqueName: \"kubernetes.io/projected/b5757329-8692-4719-b3c7-b5df78110fcf-kube-api-access-ztdc9\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:17:33.296322 master-0 
kubenswrapper[19803]: I0313 01:17:33.295765 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295787 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-kubelet\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295805 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-conf-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295826 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295849 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-modprobe-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " 
pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295868 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-sys\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295888 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b67a99-eada-44d7-93eb-cc3ced777fc6-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295908 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qmr7z\" (UniqueName: \"kubernetes.io/projected/f771149b-9d62-408e-be6f-72f575b1ec42-kube-api-access-qmr7z\") pod \"migrator-57ccdf9b5-5zsh9\" (UID: \"f771149b-9d62-408e-be6f-72f575b1ec42\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295930 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-serving-ca\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295953 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.295974 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/69da0e58-2ae6-4d4b-b125-77e93df3d660-iptables-alerter-script\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.296000 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75a53c09-210a-4346-99b0-a632b9e0a3c9-trusted-ca\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.296020 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-slash\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.296039 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-log-socket\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: 
I0313 01:17:33.296061 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkzq\" (UniqueName: \"kubernetes.io/projected/74efa52b-fd97-418a-9a44-914442633f74-kube-api-access-8jkzq\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.296085 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:17:33.296322 master-0 kubenswrapper[19803]: I0313 01:17:33.296103 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296549 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296564 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/49a28ab7-1176-4213-b037-19fe18bbe57b-ovn-node-metrics-cert\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296107 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2jgj\" (UniqueName: \"kubernetes.io/projected/d163333f-fda5-4067-ad7c-6f646ae411c8-kube-api-access-v2jgj\") pod 
\"csi-snapshot-controller-operator-5685fbc7d-478l8\" (UID: \"d163333f-fda5-4067-ad7c-6f646ae411c8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296718 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-systemd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296761 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-trusted-ca-bundle\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296792 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wds6q\" (UniqueName: \"kubernetes.io/projected/95c7493b-ad9d-490e-83f3-aa28750b2b5e-kube-api-access-wds6q\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296818 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-node-pullsecrets\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296846 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-encryption-config\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296901 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296930 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-cnibin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296947 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/75a53c09-210a-4346-99b0-a632b9e0a3c9-metrics-tls\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296959 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfc2caf-126e-41b9-9b31-05f7a45d8536-serving-cert\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.296988 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6fd82994-f4d4-49e9-8742-07e206322e76-available-featuregates\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.297084 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.297137 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.297146 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.297162 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-ovn\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.297184 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.297208 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfc2caf-126e-41b9-9b31-05f7a45d8536-config\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.297228 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69da0e58-2ae6-4d4b-b125-77e93df3d660-host-slash\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.297247 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-multus\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:17:33.297324 master-0 kubenswrapper[19803]: I0313 01:17:33.297341 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/6fd82994-f4d4-49e9-8742-07e206322e76-available-featuregates\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297361 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde89b0b-7133-4b97-9e35-51c0382bd366-config\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297448 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98t5h\" (UniqueName: \"kubernetes.io/projected/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-kube-api-access-98t5h\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297166 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297595 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297626 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-metrics-tls\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297653 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297675 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-conf\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297694 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297717 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhk76\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-kube-api-access-fhk76\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297741 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297762 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297783 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5xmqc\" (UniqueName: \"kubernetes.io/projected/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-kube-api-access-5xmqc\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297803 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297355 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297822 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-multus-certs\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297844 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c7493b-ad9d-490e-83f3-aa28750b2b5e-config-volume\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297891 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vccjz\" (UniqueName: \"kubernetes.io/projected/0caabde8-d49a-431d-afe5-8b283188c11c-kube-api-access-vccjz\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297909 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit-dir\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297930 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297949 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-netd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.297973 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lz8ww\" (UniqueName: \"kubernetes.io/projected/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-kube-api-access-lz8ww\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.298094 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.298121 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.298144 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.298165 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-serving-cert\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.298193 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-845hm\" (UniqueName: \"kubernetes.io/projected/b74de987-7962-425e-9447-24b285eb888f-kube-api-access-845hm\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.298214 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-serving-cert\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.298237 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-netns\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:17:33.298208 master-0 kubenswrapper[19803]: I0313 01:17:33.298260 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/46015913-c499-49b1-a9f6-a61c6e96b13f-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.298281 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.298545 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c687237e-50e5-405d-8fef-0efbc3866630-webhook-cert\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.298798 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"whereabouts-configmap\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-whereabouts-configmap\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.298874 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-config\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.299209 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-serving-cert\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.298281 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c687237e-50e5-405d-8fef-0efbc3866630-webhook-cert\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.299368 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d89b5d71-5522-433e-a0bb-f2767332e744-signing-cabundle\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.299387 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.299405 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-bin\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.299427 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n58nf\" (UniqueName: \"kubernetes.io/projected/49a28ab7-1176-4213-b037-19fe18bbe57b-kube-api-access-n58nf\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.299445 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-tmp\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.299843 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.299936 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-tmp\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.299967 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2dlx5\" (UniqueName: \"kubernetes.io/projected/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-kube-api-access-2dlx5\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.299988 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95c7493b-ad9d-490e-83f3-aa28750b2b5e-metrics-tls\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300009 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300029 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmnh2\" (UniqueName: \"kubernetes.io/projected/d89b5d71-5522-433e-a0bb-f2767332e744-kube-api-access-lmnh2\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300047 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqfj5\" (UniqueName: \"kubernetes.io/projected/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-kube-api-access-pqfj5\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300066 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4bzs5\" (UniqueName: \"kubernetes.io/projected/31f19d97-50f9-4486-a8f9-df61ef2b0528-kube-api-access-4bzs5\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300086 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-node-log\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300111 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bd264af8-4ced-40c4-b4f6-202bab42d0cb-hosts-file\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300137 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/75a53c09-210a-4346-99b0-a632b9e0a3c9-trusted-ca\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300448 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-client\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300540 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj7cp\" (UniqueName: \"kubernetes.io/projected/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-kube-api-access-pj7cp\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300591 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fde89b0b-7133-4b97-9e35-51c0382bd366-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300611 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300637 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300628 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-config\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300696 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzxv5\" (UniqueName: \"kubernetes.io/projected/69da0e58-2ae6-4d4b-b125-77e93df3d660-kube-api-access-pzxv5\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300743 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-var-lib-kubelet\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300767 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300786 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6db75e5-efd1-4bfa-9941-0934d7621ba2-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300804 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-netns\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300821 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91fc568a-61ad-400e-a54e-21d62e51bb17-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300843 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300863 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-host-etc-kube\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300885 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300903 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-txxbg\" (UniqueName: \"kubernetes.io/projected/c687237e-50e5-405d-8fef-0efbc3866630-kube-api-access-txxbg\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300922 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-daemon-config\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.300942 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4b8jr\" (UniqueName: \"kubernetes.io/projected/7d874a21-43aa-4d81-b904-853fb3da5a94-kube-api-access-4b8jr\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301091 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-client\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301109 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-run\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301128 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-metrics-certs\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301144 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-config\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301162 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysconfig\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301263 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zpdjh\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-kube-api-access-zpdjh\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301280 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-var-lib-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301297 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd82994-f4d4-49e9-8742-07e206322e76-serving-cert\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301317 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c377a67-e763-4925-afae-a7f8546a369b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301334 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-systemd\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301353 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301371 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301389 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smhrl\" (UniqueName: \"kubernetes.io/projected/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-kube-api-access-smhrl\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn"
Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301407 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2nbvg\" (UniqueName: \"kubernetes.io/projected/fbfc2caf-126e-41b9-9b31-05f7a45d8536-kube-api-access-2nbvg\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID:
\"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301426 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301445 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rg4g\" (UniqueName: \"kubernetes.io/projected/96b67a99-eada-44d7-93eb-cc3ced777fc6-kube-api-access-4rg4g\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301464 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-etc-tuned\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301482 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sjkgv\" (UniqueName: \"kubernetes.io/projected/de46c12a-aa3e-442e-bcc4-365d05f50103-kube-api-access-sjkgv\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301501 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-systemd-units\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301537 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6wzz\" (UniqueName: \"kubernetes.io/projected/8c377a67-e763-4925-afae-a7f8546a369b-kube-api-access-t6wzz\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301556 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k5gc8\" (UniqueName: \"kubernetes.io/projected/6ad2904e-ece9-4d72-8683-c3e691e07497-kube-api-access-k5gc8\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301577 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-image-import-ca\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.301595 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-kubernetes\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.302322 
master-0 kubenswrapper[19803]: I0313 01:17:33.301615 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zzqj\" (UniqueName: \"kubernetes.io/projected/0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a-kube-api-access-5zzqj\") pod \"csi-snapshot-controller-7577d6f48-bj5ld\" (UID: \"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.302303 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde89b0b-7133-4b97-9e35-51c0382bd366-serving-cert\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.302327 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jc8xs\" (UniqueName: \"kubernetes.io/projected/46015913-c499-49b1-a9f6-a61c6e96b13f-kube-api-access-jc8xs\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.302349 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6db75e5-efd1-4bfa-9941-0934d7621ba2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:17:33.302322 master-0 kubenswrapper[19803]: I0313 01:17:33.302370 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/d89b5d71-5522-433e-a0bb-f2767332e744-signing-key\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.302870 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b4qsk\" (UniqueName: \"kubernetes.io/projected/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-kube-api-access-b4qsk\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.302898 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-bin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.302936 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.302954 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b67a99-eada-44d7-93eb-cc3ced777fc6-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.302972 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.302992 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74efa52b-fd97-418a-9a44-914442633f74-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303261 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-system-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303281 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-cni-binary-copy\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303298 19803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-k8s-cni-cncf-io\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303318 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hngc8\" (UniqueName: \"kubernetes.io/projected/2ec42095-36f5-48cf-af9d-e7a60f6cb121-kube-api-access-hngc8\") pod \"network-check-source-7c67b67d47-xd626\" (UID: \"2ec42095-36f5-48cf-af9d-e7a60f6cb121\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-xd626" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303337 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-etc-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303356 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74efa52b-fd97-418a-9a44-914442633f74-config\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303373 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-hostroot\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " 
pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303392 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303435 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-config\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303455 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xcf2h\" (UniqueName: \"kubernetes.io/projected/bd264af8-4ced-40c4-b4f6-202bab42d0cb-kube-api-access-xcf2h\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303476 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-socket-dir-parent\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303498 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cnibin\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303555 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-env-overrides\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303574 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-os-release\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303593 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-operand-assets\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303613 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-stats-auth\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303631 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-config\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303650 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303673 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303692 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-default-certificate\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303710 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " 
pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303744 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303764 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-system-cni-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303933 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6db75e5-efd1-4bfa-9941-0934d7621ba2-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303958 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rrvhw\" (UniqueName: \"kubernetes.io/projected/8ad2a6d5-6edf-4840-89f9-47847c8dac05-kube-api-access-rrvhw\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303976 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0caabde8-d49a-431d-afe5-8b283188c11c-service-ca-bundle\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.303995 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-host\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304014 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-bound-sa-token\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304035 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5lg5\" (UniqueName: \"kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304053 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-kubelet\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.314445 master-0 
kubenswrapper[19803]: I0313 01:17:33.304071 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-script-lib\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304091 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-config\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304131 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304150 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fz9qf\" (UniqueName: \"kubernetes.io/projected/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-kube-api-access-fz9qf\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304169 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-kube-api-access\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: 
\"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304189 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304208 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304227 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-env-overrides\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304244 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304260 19803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-lib-modules\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304280 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-etc-kubernetes\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304297 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-os-release\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304318 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-ovnkube-identity-cm\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304341 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304605 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-monitoring-operator-tls\" (UniqueName: \"kubernetes.io/secret/46015913-c499-49b1-a9f6-a61c6e96b13f-cluster-monitoring-operator-tls\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.304814 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/74efa52b-fd97-418a-9a44-914442633f74-serving-cert\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.305060 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-env-overrides\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.305481 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.305534 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-config\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.305764 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.306574 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311150 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-serving-cert\") pod \"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311161 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c6db75e5-efd1-4bfa-9941-0934d7621ba2-serving-cert\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311198 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.310296 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.310169 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311326 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fbfc2caf-126e-41b9-9b31-05f7a45d8536-config\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311485 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/91fc568a-61ad-400e-a54e-21d62e51bb17-trusted-ca\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311560 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-metrics-tls\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311585 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.310377 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311710 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-service-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311753 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311866 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311863 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-daemon-config\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.311937 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-client\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312090 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-env-overrides\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312114 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312193 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operand-assets\" (UniqueName: \"kubernetes.io/empty-dir/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-operand-assets\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312209 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6fd82994-f4d4-49e9-8742-07e206322e76-serving-cert\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312214 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312263 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312397 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312467 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312470 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8c377a67-e763-4925-afae-a7f8546a369b-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312498 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312632 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312836 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-tuned\" (UniqueName: \"kubernetes.io/empty-dir/b74de987-7962-425e-9447-24b285eb888f-etc-tuned\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.312995 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c6db75e5-efd1-4bfa-9941-0934d7621ba2-config\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.313114 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/7d874a21-43aa-4d81-b904-853fb3da5a94-metrics-tls\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.313392 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-config\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.313701 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-tuning-operator-tls\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-node-tuning-operator-tls\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.313905 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-env-overrides\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.314145 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/c687237e-50e5-405d-8fef-0efbc3866630-ovnkube-identity-cm\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.310950 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8c377a67-e763-4925-afae-a7f8546a369b-ovnkube-config\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.314326 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-service-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"
Mar 13 01:17:33.314445 master-0 kubenswrapper[19803]: I0313 01:17:33.314478 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-config\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.314848 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-operator-metrics\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.314995 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/96b67a99-eada-44d7-93eb-cc3ced777fc6-config\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.315158 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/91fc568a-61ad-400e-a54e-21d62e51bb17-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.315321 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-config\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.315549 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/74efa52b-fd97-418a-9a44-914442633f74-config\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.315844 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.317327 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.317449 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.318905 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/6ad2904e-ece9-4d72-8683-c3e691e07497-srv-cert\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.318978 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.319223 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.319283 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8ad2a6d5-6edf-4840-89f9-47847c8dac05-marketplace-trusted-ca\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.319317 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.319655 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.319971 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-etcd-ca\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.320109 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.320216 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.320317 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.320436 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt"
Mar 13 01:17:33.320873 master-0 kubenswrapper[19803]: I0313 01:17:33.320560 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret"
Mar 13 01:17:33.321751 master-0 kubenswrapper[19803]: I0313 01:17:33.321216 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert"
Mar 13 01:17:33.321751 master-0 kubenswrapper[19803]: I0313 01:17:33.321539 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 01:17:33.321751 master-0 kubenswrapper[19803]: I0313 01:17:33.321701 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert"
Mar 13 01:17:33.321916 master-0 kubenswrapper[19803]: I0313 01:17:33.321835 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/31f19d97-50f9-4486-a8f9-df61ef2b0528-srv-cert\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.322826 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.323014 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.323088 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.323356 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.323470 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fde89b0b-7133-4b97-9e35-51c0382bd366-config\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.323492 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.323827 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.323860 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.324092 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.324260 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.324763 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b5757329-8692-4719-b3c7-b5df78110fcf-trusted-ca-bundle\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.324948 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/d89b5d71-5522-433e-a0bb-f2767332e744-signing-key\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.325198 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fde89b0b-7133-4b97-9e35-51c0382bd366-serving-cert\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.325406 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cni-binary-copy\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.325645 master-0 kubenswrapper[19803]: I0313 01:17:33.325546 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-apiservice-cert\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.325965 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/de46c12a-aa3e-442e-bcc4-365d05f50103-cni-binary-copy\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.327993 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-olm-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-cluster-olm-operator-serving-cert\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.328093 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.328216 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b5757329-8692-4719-b3c7-b5df78110fcf-serving-cert\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.328621 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.328682 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-trusted-ca\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.328839 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/fbfc2caf-126e-41b9-9b31-05f7a45d8536-serving-cert\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.328990 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/96b67a99-eada-44d7-93eb-cc3ced777fc6-serving-cert\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.329047 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-metrics-certs\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.329153 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-serving-cert\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.329614 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemetry-config\" (UniqueName: \"kubernetes.io/configmap/46015913-c499-49b1-a9f6-a61c6e96b13f-telemetry-config\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7"
Mar 13 01:17:33.330940 master-0 kubenswrapper[19803]: I0313 01:17:33.330036 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 13 01:17:33.332364 master-0 kubenswrapper[19803]: I0313 01:17:33.332010 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-package-server-manager-serving-cert\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:17:33.332364 master-0 kubenswrapper[19803]: I0313 01:17:33.332138 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Mar 13 01:17:33.336998 master-0 kubenswrapper[19803]: I0313 01:17:33.336946 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/49a28ab7-1176-4213-b037-19fe18bbe57b-ovnkube-script-lib\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.344362 master-0 kubenswrapper[19803]: I0313 01:17:33.344237 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/d89b5d71-5522-433e-a0bb-f2767332e744-signing-cabundle\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf"
Mar 13 01:17:33.345049 master-0 kubenswrapper[19803]: I0313 01:17:33.344993 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt"
Mar 13 01:17:33.362034 master-0 kubenswrapper[19803]: I0313 01:17:33.361749 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 13 01:17:33.362323 master-0 kubenswrapper[19803]: I0313 01:17:33.362094 19803 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Mar 13 01:17:33.370070 master-0 kubenswrapper[19803]: I0313 01:17:33.369991 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/69da0e58-2ae6-4d4b-b125-77e93df3d660-iptables-alerter-script\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg"
Mar 13 01:17:33.405690 master-0 kubenswrapper[19803]: I0313 01:17:33.405379 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 13 01:17:33.407011 master-0 kubenswrapper[19803]: I0313 01:17:33.406952 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"
Mar 13 01:17:33.407444 master-0 kubenswrapper[19803]: I0313 01:17:33.407193 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:33.407444 master-0 kubenswrapper[19803]: I0313 01:17:33.407243 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:33.407444 master-0 kubenswrapper[19803]: I0313 01:17:33.407286 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-utilities\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl"
Mar 13 01:17:33.407444 master-0 kubenswrapper[19803]: I0313 01:17:33.407407 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-utilities\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl"
Mar 13 01:17:33.407444 master-0 kubenswrapper[19803]: I0313 01:17:33.407427 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-config\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:17:33.407631 master-0 kubenswrapper[19803]: I0313 01:17:33.407486 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:33.407631 master-0 kubenswrapper[19803]: I0313 01:17:33.407536 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-netns\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.407631 master-0 kubenswrapper[19803]: I0313 01:17:33.407568 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-var-lib-kubelet\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.407725 master-0 kubenswrapper[19803]: I0313 01:17:33.407636 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-var-lib-kubelet\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.407725 master-0 kubenswrapper[19803]: I0313 01:17:33.407637 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-netns\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.407725 master-0 kubenswrapper[19803]: I0313 01:17:33.407669 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:17:33.407725 master-0 kubenswrapper[19803]: I0313 01:17:33.407701 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdpt2\" (UniqueName: \"kubernetes.io/projected/3418d0fb-d0ae-4634-a645-dc387a19147f-kube-api-access-tdpt2\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw"
Mar 13 01:17:33.407865 master-0 kubenswrapper[19803]: I0313 01:17:33.407731 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName:
\"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:17:33.407865 master-0 kubenswrapper[19803]: I0313 01:17:33.407760 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-run\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.407865 master-0 kubenswrapper[19803]: I0313 01:17:33.407827 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pgz5w\" (UniqueName: \"kubernetes.io/projected/581ff17d-f121-4ece-8e45-81f1f710d163-kube-api-access-pgz5w\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:17:33.407952 master-0 kubenswrapper[19803]: I0313 01:17:33.407880 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gh7ks\" (UniqueName: \"kubernetes.io/projected/2581e5b5-8cbb-4fa5-9888-98fb572a6232-kube-api-access-gh7ks\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:17:33.407952 master-0 kubenswrapper[19803]: I0313 01:17:33.407918 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2581e5b5-8cbb-4fa5-9888-98fb572a6232-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:17:33.407952 
master-0 kubenswrapper[19803]: I0313 01:17:33.407941 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-service-ca\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:17:33.408057 master-0 kubenswrapper[19803]: I0313 01:17:33.407966 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7e938267-de1f-46f7-bf78-b0b3e810c4fa-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:33.408187 master-0 kubenswrapper[19803]: I0313 01:17:33.408117 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-run\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.408232 master-0 kubenswrapper[19803]: I0313 01:17:33.408188 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-44dmt\" (UniqueName: \"kubernetes.io/projected/9863f7ff-4c8d-42a3-a822-01697cf9c920-kube-api-access-44dmt\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:17:33.408394 master-0 kubenswrapper[19803]: I0313 01:17:33.408269 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvckz\" (UniqueName: \"kubernetes.io/projected/fb5dee36-70a4-47a4-afc2-d3209a476362-kube-api-access-mvckz\") pod \"redhat-marketplace-cx58l\" (UID: 
\"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:33.408394 master-0 kubenswrapper[19803]: I0313 01:17:33.408314 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kvmpk\" (UniqueName: \"kubernetes.io/projected/7e938267-de1f-46f7-bf78-b0b3e810c4fa-kube-api-access-kvmpk\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:33.408394 master-0 kubenswrapper[19803]: I0313 01:17:33.408363 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-catalog-content\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:17:33.408530 master-0 kubenswrapper[19803]: I0313 01:17:33.408411 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-systemd-units\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.408530 master-0 kubenswrapper[19803]: I0313 01:17:33.408440 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca06fac5-6707-4521-88ce-1768fede42c2-tmpfs\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:33.408530 master-0 kubenswrapper[19803]: I0313 01:17:33.408467 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-serving-ca\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:33.408530 master-0 kubenswrapper[19803]: I0313 01:17:33.408478 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9863f7ff-4c8d-42a3-a822-01697cf9c920-catalog-content\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:17:33.408530 master-0 kubenswrapper[19803]: I0313 01:17:33.408491 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-kubernetes\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.408715 master-0 kubenswrapper[19803]: I0313 01:17:33.408547 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:17:33.408715 master-0 kubenswrapper[19803]: I0313 01:17:33.408596 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/ca06fac5-6707-4521-88ce-1768fede42c2-tmpfs\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:33.408715 master-0 kubenswrapper[19803]: I0313 01:17:33.408598 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-client\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:33.408715 master-0 kubenswrapper[19803]: I0313 01:17:33.408648 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-98t7n\" (UniqueName: \"kubernetes.io/projected/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-kube-api-access-98t7n\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:17:33.408830 master-0 kubenswrapper[19803]: I0313 01:17:33.408789 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-system-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.408830 master-0 kubenswrapper[19803]: I0313 01:17:33.408823 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-k8s-cni-cncf-io\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.408895 master-0 kubenswrapper[19803]: I0313 01:17:33.408865 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " 
pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:33.408933 master-0 kubenswrapper[19803]: I0313 01:17:33.408906 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-utilities\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:33.408961 master-0 kubenswrapper[19803]: I0313 01:17:33.408930 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-hostroot\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.408961 master-0 kubenswrapper[19803]: I0313 01:17:33.408955 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-socket-dir-parent\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.409021 master-0 kubenswrapper[19803]: I0313 01:17:33.408978 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-systemd-units\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.408983 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e799871-735a-44e8-8193-24c5bb388928-serving-cert\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: 
\"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409068 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-kubernetes\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409078 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-system-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409095 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-encryption-config\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409119 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-os-release\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409144 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-images\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: 
\"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409149 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-socket-dir-parent\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409167 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409186 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409209 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-utilities\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409221 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-system-cni-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409213 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-os-release\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.409315 master-0 kubenswrapper[19803]: I0313 01:17:33.409287 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-system-cni-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:17:33.409704 master-0 kubenswrapper[19803]: I0313 01:17:33.409364 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-k8s-cni-cncf-io\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.409704 master-0 kubenswrapper[19803]: I0313 01:17:33.409437 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvrdt\" (UniqueName: \"kubernetes.io/projected/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-kube-api-access-jvrdt\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:17:33.409704 master-0 kubenswrapper[19803]: I0313 01:17:33.409464 19803 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:33.409704 master-0 kubenswrapper[19803]: I0313 01:17:33.409489 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:33.409811 master-0 kubenswrapper[19803]: I0313 01:17:33.409760 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:33.409841 master-0 kubenswrapper[19803]: I0313 01:17:33.409820 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:17:33.409917 master-0 kubenswrapper[19803]: I0313 01:17:33.409866 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-etc-kubernetes\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.409917 master-0 
kubenswrapper[19803]: I0313 01:17:33.409822 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-hostroot\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.409995 master-0 kubenswrapper[19803]: I0313 01:17:33.409914 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-etc-kubernetes\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.409995 master-0 kubenswrapper[19803]: I0313 01:17:33.409953 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2pt2w\" (UniqueName: \"kubernetes.io/projected/ca06fac5-6707-4521-88ce-1768fede42c2-kube-api-access-2pt2w\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:33.409995 master-0 kubenswrapper[19803]: I0313 01:17:33.409989 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:17:33.410073 master-0 kubenswrapper[19803]: I0313 01:17:33.410036 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-modprobe-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.410073 
master-0 kubenswrapper[19803]: I0313 01:17:33.410060 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-kubelet\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.410126 master-0 kubenswrapper[19803]: I0313 01:17:33.410083 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-conf-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.410126 master-0 kubenswrapper[19803]: I0313 01:17:33.410108 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-catalog-content\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:33.410181 master-0 kubenswrapper[19803]: I0313 01:17:33.410136 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:17:33.410181 master-0 kubenswrapper[19803]: I0313 01:17:33.410160 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-systemd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.410241 master-0 kubenswrapper[19803]: I0313 01:17:33.410184 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:33.410241 master-0 kubenswrapper[19803]: I0313 01:17:33.410226 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:17:33.410292 master-0 kubenswrapper[19803]: I0313 01:17:33.410249 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:17:33.410292 master-0 kubenswrapper[19803]: I0313 01:17:33.410272 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-cnibin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.410344 master-0 kubenswrapper[19803]: I0313 01:17:33.410305 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-ovn\") pod \"ovnkube-node-nlhbx\" 
(UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.410344 master-0 kubenswrapper[19803]: I0313 01:17:33.410330 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.410404 master-0 kubenswrapper[19803]: I0313 01:17:33.410354 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-conf\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.410436 master-0 kubenswrapper[19803]: I0313 01:17:33.410413 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:17:33.410464 master-0 kubenswrapper[19803]: I0313 01:17:33.410437 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-multus\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.410491 master-0 kubenswrapper[19803]: I0313 01:17:33.410462 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"snapshots\" (UniqueName: 
\"kubernetes.io/empty-dir/6e799871-735a-44e8-8193-24c5bb388928-snapshots\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:17:33.410491 master-0 kubenswrapper[19803]: I0313 01:17:33.410484 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:17:33.410569 master-0 kubenswrapper[19803]: I0313 01:17:33.410543 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:33.410598 master-0 kubenswrapper[19803]: I0313 01:17:33.410570 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:33.410633 master-0 kubenswrapper[19803]: I0313 01:17:33.410596 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit-dir\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:33.410633 master-0 kubenswrapper[19803]: 
I0313 01:17:33.410620 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:17:33.410707 master-0 kubenswrapper[19803]: I0313 01:17:33.410644 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:17:33.410707 master-0 kubenswrapper[19803]: I0313 01:17:33.410667 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-multus-certs\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.410770 master-0 kubenswrapper[19803]: I0313 01:17:33.410734 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-multus-certs\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.410833 master-0 kubenswrapper[19803]: I0313 01:17:33.410799 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-kubelet\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" 
Mar 13 01:17:33.410833 master-0 kubenswrapper[19803]: I0313 01:17:33.410827 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit-dir\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:33.410944 master-0 kubenswrapper[19803]: I0313 01:17:33.410926 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-netd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.410981 master-0 kubenswrapper[19803]: I0313 01:17:33.410959 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:17:33.411016 master-0 kubenswrapper[19803]: I0313 01:17:33.410979 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-conf-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.411016 master-0 kubenswrapper[19803]: I0313 01:17:33.410995 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " 
pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:17:33.411084 master-0 kubenswrapper[19803]: I0313 01:17:33.410999 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"snapshots\" (UniqueName: \"kubernetes.io/empty-dir/6e799871-735a-44e8-8193-24c5bb388928-snapshots\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:17:33.411084 master-0 kubenswrapper[19803]: I0313 01:17:33.411054 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-systemd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.411084 master-0 kubenswrapper[19803]: I0313 01:17:33.411076 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-netd\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.411161 master-0 kubenswrapper[19803]: I0313 01:17:33.411126 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-catalog-content\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:17:33.411220 master-0 kubenswrapper[19803]: I0313 01:17:33.411187 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-ovn\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.411255 master-0 kubenswrapper[19803]: I0313 01:17:33.411214 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-cnibin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.411255 master-0 kubenswrapper[19803]: I0313 01:17:33.411210 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fb5dee36-70a4-47a4-afc2-d3209a476362-catalog-content\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:33.411309 master-0 kubenswrapper[19803]: I0313 01:17:33.411270 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-catalog-content\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:17:33.411336 master-0 kubenswrapper[19803]: I0313 01:17:33.411302 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.411369 master-0 kubenswrapper[19803]: I0313 01:17:33.411338 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-multus\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " 
pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.411489 master-0 kubenswrapper[19803]: I0313 01:17:33.411465 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-modprobe-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-modprobe-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.411537 master-0 kubenswrapper[19803]: I0313 01:17:33.411467 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:17:33.411599 master-0 kubenswrapper[19803]: I0313 01:17:33.411579 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-netns\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.411635 master-0 kubenswrapper[19803]: I0313 01:17:33.411616 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-conf\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-conf\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.411664 master-0 kubenswrapper[19803]: I0313 01:17:33.411644 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-run-netns\") pod \"multus-xk75p\" (UID: 
\"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.411766 master-0 kubenswrapper[19803]: I0313 01:17:33.411727 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-bin\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.411852 master-0 kubenswrapper[19803]: I0313 01:17:33.411799 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-node-log\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.412266 master-0 kubenswrapper[19803]: I0313 01:17:33.412212 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-cni-bin\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.412308 master-0 kubenswrapper[19803]: I0313 01:17:33.411853 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-node-log\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.412365 master-0 kubenswrapper[19803]: I0313 01:17:33.412330 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " 
pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:33.412402 master-0 kubenswrapper[19803]: I0313 01:17:33.412388 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-config\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:33.412503 master-0 kubenswrapper[19803]: I0313 01:17:33.412470 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bd264af8-4ced-40c4-b4f6-202bab42d0cb-hosts-file\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:17:33.412776 master-0 kubenswrapper[19803]: I0313 01:17:33.412739 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/56e20b21-ba17-46ae-a740-0e7bd45eae5f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: \"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:17:33.412959 master-0 kubenswrapper[19803]: I0313 01:17:33.412906 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/bd264af8-4ced-40c4-b4f6-202bab42d0cb-hosts-file\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6" Mar 13 01:17:33.413023 master-0 kubenswrapper[19803]: I0313 01:17:33.412927 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalogserver-certs\" (UniqueName: 
\"kubernetes.io/secret/81835d51-a414-440f-889b-690561e98d6a-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:17:33.413087 master-0 kubenswrapper[19803]: I0313 01:17:33.413062 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2581e5b5-8cbb-4fa5-9888-98fb572a6232-cert\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:17:33.413135 master-0 kubenswrapper[19803]: I0313 01:17:33.413117 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:17:33.413181 master-0 kubenswrapper[19803]: I0313 01:17:33.413164 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-catalog-content\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:33.413225 master-0 kubenswrapper[19803]: I0313 01:17:33.413208 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9rtds\" (UniqueName: \"kubernetes.io/projected/6da2aac0-42a0-45c2-93ec-b148f5889e8b-kube-api-access-9rtds\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:17:33.413275 master-0 
kubenswrapper[19803]: I0313 01:17:33.413256 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:17:33.413482 master-0 kubenswrapper[19803]: I0313 01:17:33.413426 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-host-etc-kube\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:17:33.413538 master-0 kubenswrapper[19803]: I0313 01:17:33.413491 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-catalog-content\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:33.413661 master-0 kubenswrapper[19803]: I0313 01:17:33.413618 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:17:33.413706 master-0 kubenswrapper[19803]: I0313 01:17:33.413683 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-cluster-storage-operator-serving-cert\") pod 
\"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:17:33.413801 master-0 kubenswrapper[19803]: I0313 01:17:33.413776 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-host-etc-kube\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7" Mar 13 01:17:33.413852 master-0 kubenswrapper[19803]: I0313 01:17:33.413834 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysconfig\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.413894 master-0 kubenswrapper[19803]: I0313 01:17:33.413876 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-service-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:17:33.413927 master-0 kubenswrapper[19803]: I0313 01:17:33.413909 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-cache\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:17:33.413956 master-0 kubenswrapper[19803]: I0313 01:17:33.413917 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:17:33.414053 master-0 kubenswrapper[19803]: I0313 01:17:33.414026 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysconfig\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysconfig\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.414145 master-0 kubenswrapper[19803]: I0313 01:17:33.414115 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sd26j\" (UniqueName: \"kubernetes.io/projected/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-kube-api-access-sd26j\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:33.414197 master-0 kubenswrapper[19803]: I0313 01:17:33.414174 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:33.414252 master-0 kubenswrapper[19803]: I0313 01:17:33.414230 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-var-lib-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: 
\"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.414303 master-0 kubenswrapper[19803]: I0313 01:17:33.414281 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:33.414378 master-0 kubenswrapper[19803]: I0313 01:17:33.414355 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3418d0fb-d0ae-4634-a645-dc387a19147f-rootfs\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:33.414431 master-0 kubenswrapper[19803]: I0313 01:17:33.414409 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:17:33.414480 master-0 kubenswrapper[19803]: I0313 01:17:33.414458 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-serving-cert\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:33.414577 master-0 kubenswrapper[19803]: I0313 01:17:33.414547 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt62j\" 
(UniqueName: \"kubernetes.io/projected/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-kube-api-access-vt62j\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:17:33.414643 master-0 kubenswrapper[19803]: I0313 01:17:33.414619 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:17:33.414697 master-0 kubenswrapper[19803]: I0313 01:17:33.414675 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-systemd\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd" Mar 13 01:17:33.414741 master-0 kubenswrapper[19803]: I0313 01:17:33.414719 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g8n5d\" (UniqueName: \"kubernetes.io/projected/c55a215a-9a95-4f48-8668-9b76503c3044-kube-api-access-g8n5d\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:33.414819 master-0 kubenswrapper[19803]: I0313 01:17:33.414797 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jthxn\" (UniqueName: \"kubernetes.io/projected/6e799871-735a-44e8-8193-24c5bb388928-kube-api-access-jthxn\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " 
pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:17:33.414892 master-0 kubenswrapper[19803]: I0313 01:17:33.414868 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:33.414992 master-0 kubenswrapper[19803]: I0313 01:17:33.414968 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qq6v6\" (UniqueName: \"kubernetes.io/projected/9d2f93bd-e4ce-4ed2-b249-946338f753ed-kube-api-access-qq6v6\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:33.415100 master-0 kubenswrapper[19803]: I0313 01:17:33.415034 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-bin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:33.415100 master-0 kubenswrapper[19803]: I0313 01:17:33.415089 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g89p7\" (UniqueName: \"kubernetes.io/projected/56e20b21-ba17-46ae-a740-0e7bd45eae5f-kube-api-access-g89p7\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: \"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:17:33.415163 master-0 kubenswrapper[19803]: I0313 01:17:33.415148 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-dir\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:33.415217 master-0 kubenswrapper[19803]: I0313 01:17:33.415194 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-etc-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.415268 master-0 kubenswrapper[19803]: I0313 01:17:33.415245 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:17:33.415319 master-0 kubenswrapper[19803]: I0313 01:17:33.415297 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-policies\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:33.415359 master-0 kubenswrapper[19803]: I0313 01:17:33.415341 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:17:33.415411 master-0 kubenswrapper[19803]: I0313 
01:17:33.415389 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cnibin\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s" Mar 13 01:17:33.415501 master-0 kubenswrapper[19803]: I0313 01:17:33.415475 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:33.415655 master-0 kubenswrapper[19803]: I0313 01:17:33.415612 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pt5g7\" (UniqueName: \"kubernetes.io/projected/536a2de1-e13c-47d1-b61d-88e0a5fd2851-kube-api-access-pt5g7\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:33.415717 master-0 kubenswrapper[19803]: I0313 01:17:33.415692 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4lqgs\" (UniqueName: \"kubernetes.io/projected/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-kube-api-access-4lqgs\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:33.415845 master-0 kubenswrapper[19803]: I0313 01:17:33.415819 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" 
Mar 13 01:17:33.415910 master-0 kubenswrapper[19803]: I0313 01:17:33.415887 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-host\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.415964 master-0 kubenswrapper[19803]: I0313 01:17:33.415941 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:17:33.416013 master-0 kubenswrapper[19803]: I0313 01:17:33.415990 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-psvcz\" (UniqueName: \"kubernetes.io/projected/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-kube-api-access-psvcz\") pod \"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:17:33.416086 master-0 kubenswrapper[19803]: I0313 01:17:33.416065 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-kubelet\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.416222 master-0 kubenswrapper[19803]: I0313 01:17:33.416167 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:33.416222 master-0 kubenswrapper[19803]: I0313 01:17:33.416217 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-os-release\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.416298 master-0 kubenswrapper[19803]: I0313 01:17:33.416263 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-lib-modules\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.416830 master-0 kubenswrapper[19803]: I0313 01:17:33.416791 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-var-lib-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.416830 master-0 kubenswrapper[19803]: I0313 01:17:33.416815 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-sys\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.417137 master-0 kubenswrapper[19803]: I0313 01:17:33.417107 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-tuning-conf-dir\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.417192 master-0 kubenswrapper[19803]: I0313 01:17:33.417167 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-systemd\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-systemd\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.417378 master-0 kubenswrapper[19803]: I0313 01:17:33.417335 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw"
Mar 13 01:17:33.417422 master-0 kubenswrapper[19803]: I0313 01:17:33.417404 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-trusted-ca-bundle\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st"
Mar 13 01:17:33.417701 master-0 kubenswrapper[19803]: I0313 01:17:33.417666 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-multus-cni-dir\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:17:33.417809 master-0 kubenswrapper[19803]: I0313 01:17:33.417774 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-host\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.417914 master-0 kubenswrapper[19803]: I0313 01:17:33.417885 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-etc-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.418131 master-0 kubenswrapper[19803]: I0313 01:17:33.418092 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-sys\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.418170 master-0 kubenswrapper[19803]: I0313 01:17:33.418123 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-lib-modules\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.418170 master-0 kubenswrapper[19803]: I0313 01:17:33.418166 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-kubelet\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.418223 master-0 kubenswrapper[19803]: I0313 01:17:33.417343 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/de46c12a-aa3e-442e-bcc4-365d05f50103-host-var-lib-cni-bin\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p"
Mar 13 01:17:33.418259 master-0 kubenswrapper[19803]: I0313 01:17:33.418247 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-os-release\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.418324 master-0 kubenswrapper[19803]: I0313 01:17:33.418294 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-run-openvswitch\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.418425 master-0 kubenswrapper[19803]: I0313 01:17:33.418402 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-cnibin\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:33.422148 master-0 kubenswrapper[19803]: I0313 01:17:33.422098 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0ff72b58-aca9-46f1-86ca-da8339734ac9-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-rhk4l\" (UID: \"0ff72b58-aca9-46f1-86ca-da8339734ac9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l"
Mar 13 01:17:33.422229 master-0 kubenswrapper[19803]: I0313 01:17:33.422211 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-slash\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.422281 master-0 kubenswrapper[19803]: I0313 01:17:33.422264 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-log-socket\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.422393 master-0 kubenswrapper[19803]: I0313 01:17:33.422368 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9npsh\" (UniqueName: \"kubernetes.io/projected/21110b48-25fc-434a-b156-7f6bd6064bed-kube-api-access-9npsh\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:17:33.422442 master-0 kubenswrapper[19803]: I0313 01:17:33.422424 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:17:33.422547 master-0 kubenswrapper[19803]: I0313 01:17:33.422530 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-utilities\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp"
Mar 13 01:17:33.422604 master-0 kubenswrapper[19803]: I0313 01:17:33.422572 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:17:33.422709 master-0 kubenswrapper[19803]: I0313 01:17:33.422674 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-log-socket\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.422883 master-0 kubenswrapper[19803]: I0313 01:17:33.422823 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-node-pullsecrets\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.422934 master-0 kubenswrapper[19803]: I0313 01:17:33.422846 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6da2aac0-42a0-45c2-93ec-b148f5889e8b-utilities\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp"
Mar 13 01:17:33.422934 master-0 kubenswrapper[19803]: I0313 01:17:33.422922 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-node-pullsecrets\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.423026 master-0 kubenswrapper[19803]: I0313 01:17:33.422999 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69da0e58-2ae6-4d4b-b125-77e93df3d660-host-slash\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg"
Mar 13 01:17:33.423128 master-0 kubenswrapper[19803]: I0313 01:17:33.423103 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.423308 master-0 kubenswrapper[19803]: I0313 01:17:33.423213 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/69da0e58-2ae6-4d4b-b125-77e93df3d660-host-slash\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg"
Mar 13 01:17:33.423308 master-0 kubenswrapper[19803]: I0313 01:17:33.423294 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-sysctl-d\" (UniqueName: \"kubernetes.io/host-path/b74de987-7962-425e-9447-24b285eb888f-etc-sysctl-d\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:33.423394 master-0 kubenswrapper[19803]: I0313 01:17:33.423337 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wld\" (UniqueName: \"kubernetes.io/projected/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-kube-api-access-t7wld\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:33.423446 master-0 kubenswrapper[19803]: I0313 01:17:33.423427 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.423541 master-0 kubenswrapper[19803]: I0313 01:17:33.423487 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-run-ovn-kubernetes\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.423597 master-0 kubenswrapper[19803]: I0313 01:17:33.423543 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-utilities\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:17:33.423684 master-0 kubenswrapper[19803]: I0313 01:17:33.423662 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:17:33.423732 master-0 kubenswrapper[19803]: I0313 01:17:33.423698 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:17:33.423732 master-0 kubenswrapper[19803]: I0313 01:17:33.423703 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9d2f93bd-e4ce-4ed2-b249-946338f753ed-utilities\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp"
Mar 13 01:17:33.423931 master-0 kubenswrapper[19803]: I0313 01:17:33.423907 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/49a28ab7-1176-4213-b037-19fe18bbe57b-host-slash\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:33.423977 master-0 kubenswrapper[19803]: I0313 01:17:33.423953 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbcg4\" (UniqueName: \"kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-kube-api-access-nbcg4\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:17:33.424018 master-0 kubenswrapper[19803]: I0313 01:17:33.424006 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-images\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb"
Mar 13 01:17:33.424056 master-0 kubenswrapper[19803]: I0313 01:17:33.424032 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81835d51-a414-440f-889b-690561e98d6a-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:17:33.424114 master-0 kubenswrapper[19803]: I0313 01:17:33.424095 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"
Mar 13 01:17:33.424154 master-0 kubenswrapper[19803]: I0313 01:17:33.424123 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp"
Mar 13 01:17:33.424193 master-0 kubenswrapper[19803]: I0313 01:17:33.424176 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nd8dv\" (UniqueName: \"kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-kube-api-access-nd8dv\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:17:33.424272 master-0 kubenswrapper[19803]: I0313 01:17:33.424253 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:33.424396 master-0 kubenswrapper[19803]: I0313 01:17:33.424378 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/81835d51-a414-440f-889b-690561e98d6a-cache\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:17:33.425427 master-0 kubenswrapper[19803]: I0313 01:17:33.425392 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Mar 13 01:17:33.441555 master-0 kubenswrapper[19803]: I0313 01:17:33.441492 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Mar 13 01:17:33.490411 master-0 kubenswrapper[19803]: I0313 01:17:33.490323 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 01:17:33.493222 master-0 kubenswrapper[19803]: I0313 01:17:33.493182 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Mar 13 01:17:33.496652 master-0 kubenswrapper[19803]: I0313 01:17:33.496614 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-client\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.501383 master-0 kubenswrapper[19803]: I0313 01:17:33.499504 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-serving-cert\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.507348 master-0 kubenswrapper[19803]: I0313 01:17:33.505958 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Mar 13 01:17:33.511503 master-0 kubenswrapper[19803]: I0313 01:17:33.511456 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-encryption-config\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.525117 master-0 kubenswrapper[19803]: I0313 01:17:33.525030 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 01:17:33.526051 master-0 kubenswrapper[19803]: I0313 01:17:33.525977 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:33.526570 master-0 kubenswrapper[19803]: I0313 01:17:33.526217 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-host-etc-kube\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:33.526570 master-0 kubenswrapper[19803]: I0313 01:17:33.526291 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:33.526570 master-0 kubenswrapper[19803]: I0313 01:17:33.526346 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:33.526570 master-0 kubenswrapper[19803]: I0313 01:17:33.526453 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:33.526740 master-0 kubenswrapper[19803]: I0313 01:17:33.526641 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:33.526809 master-0 kubenswrapper[19803]: I0313 01:17:33.526768 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:33.526973 master-0 kubenswrapper[19803]: I0313 01:17:33.526913 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:33.527317 master-0 kubenswrapper[19803]: I0313 01:17:33.527292 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:33.527493 master-0 kubenswrapper[19803]: I0313 01:17:33.527409 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:33.527493 master-0 kubenswrapper[19803]: I0313 01:17:33.527426 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v"
Mar 13 01:17:33.527493 master-0 kubenswrapper[19803]: I0313 01:17:33.527459 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-ssl-certs\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v"
Mar 13 01:17:33.527493 master-0 kubenswrapper[19803]: I0313 01:17:33.527478 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.527666 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.527690 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.527734 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-docker\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.527896 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.528718 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-etc-cvo-updatepayloads\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.529410 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.529751 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3418d0fb-d0ae-4634-a645-dc387a19147f-rootfs\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.529874 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-dir\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.529899 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.529920 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.529993 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.530049 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.530190 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.530292 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.530360 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/3418d0fb-d0ae-4634-a645-dc387a19147f-rootfs\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.530386 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-dir\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.530418 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/81835d51-a414-440f-889b-690561e98d6a-etc-containers\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.530444 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.530473 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-docker\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-docker\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:17:33.531302 master-0 kubenswrapper[19803]: I0313 01:17:33.531219 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-containers\" (UniqueName: \"kubernetes.io/host-path/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-etc-containers\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:17:33.537224 master-0 kubenswrapper[19803]: I0313 01:17:33.537125 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-image-import-ca\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:33.541588 master-0 kubenswrapper[19803]: I0313 01:17:33.541379 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 01:17:33.548048 master-0 kubenswrapper[19803]: I0313 01:17:33.547987 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\"
(UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-etcd-serving-ca\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:33.568246 master-0 kubenswrapper[19803]: I0313 01:17:33.568170 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Mar 13 01:17:33.569677 master-0 kubenswrapper[19803]: I0313 01:17:33.569628 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-trusted-ca-bundle\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:33.583557 master-0 kubenswrapper[19803]: I0313 01:17:33.581567 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 01:17:33.601652 master-0 kubenswrapper[19803]: I0313 01:17:33.601579 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 01:17:33.604904 master-0 kubenswrapper[19803]: I0313 01:17:33.602915 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-config\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:33.613285 master-0 kubenswrapper[19803]: I0313 01:17:33.613210 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:17:33.613382 master-0 kubenswrapper[19803]: I0313 01:17:33.613295 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:17:33.613699 master-0 kubenswrapper[19803]: I0313 01:17:33.613626 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:17:33.621771 master-0 kubenswrapper[19803]: I0313 01:17:33.621721 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Mar 13 01:17:33.625776 master-0 kubenswrapper[19803]: I0313 01:17:33.624309 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:17:33.625880 master-0 kubenswrapper[19803]: I0313 01:17:33.625836 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-audit\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:33.630849 master-0 kubenswrapper[19803]: I0313 01:17:33.630816 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:17:33.641673 master-0 kubenswrapper[19803]: I0313 01:17:33.641236 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 01:17:33.651838 master-0 kubenswrapper[19803]: I0313 01:17:33.651791 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:17:33.652035 master-0 kubenswrapper[19803]: I0313 01:17:33.651991 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/95c7493b-ad9d-490e-83f3-aa28750b2b5e-config-volume\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:17:33.670838 master-0 kubenswrapper[19803]: I0313 01:17:33.666491 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 01:17:33.670838 master-0 kubenswrapper[19803]: I0313 01:17:33.670092 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/95c7493b-ad9d-490e-83f3-aa28750b2b5e-metrics-tls\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:17:33.682749 master-0 kubenswrapper[19803]: I0313 01:17:33.682674 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 01:17:33.701330 master-0 kubenswrapper[19803]: I0313 01:17:33.701260 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Mar 13 01:17:33.722412 master-0 kubenswrapper[19803]: I0313 01:17:33.722344 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Mar 13 01:17:33.723554 master-0 kubenswrapper[19803]: I0313 01:17:33.723527 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0caabde8-d49a-431d-afe5-8b283188c11c-service-ca-bundle\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 
01:17:33.734528 master-0 kubenswrapper[19803]: I0313 01:17:33.734415 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock\") pod \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " Mar 13 01:17:33.734809 master-0 kubenswrapper[19803]: I0313 01:17:33.734628 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock\") pod \"7106c6fe-7c8d-45b9-bc5c-521db743663f\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " Mar 13 01:17:33.734809 master-0 kubenswrapper[19803]: I0313 01:17:33.734666 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir\") pod \"7106c6fe-7c8d-45b9-bc5c-521db743663f\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " Mar 13 01:17:33.734809 master-0 kubenswrapper[19803]: I0313 01:17:33.734621 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock" (OuterVolumeSpecName: "var-lock") pod "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:33.734809 master-0 kubenswrapper[19803]: I0313 01:17:33.734754 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7106c6fe-7c8d-45b9-bc5c-521db743663f" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:33.734809 master-0 kubenswrapper[19803]: I0313 01:17:33.734721 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir\") pod \"fdcd8438-d33f-490f-a841-8944c58506f8\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " Mar 13 01:17:33.735153 master-0 kubenswrapper[19803]: I0313 01:17:33.734686 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock" (OuterVolumeSpecName: "var-lock") pod "7106c6fe-7c8d-45b9-bc5c-521db743663f" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:33.735153 master-0 kubenswrapper[19803]: I0313 01:17:33.734816 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "fdcd8438-d33f-490f-a841-8944c58506f8" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:33.735153 master-0 kubenswrapper[19803]: I0313 01:17:33.734904 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir\") pod \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " Mar 13 01:17:33.735153 master-0 kubenswrapper[19803]: I0313 01:17:33.734966 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:33.735153 master-0 kubenswrapper[19803]: I0313 01:17:33.735066 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock\") pod \"fdcd8438-d33f-490f-a841-8944c58506f8\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " Mar 13 01:17:33.735938 master-0 kubenswrapper[19803]: I0313 01:17:33.735885 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock" (OuterVolumeSpecName: "var-lock") pod "fdcd8438-d33f-490f-a841-8944c58506f8" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:17:33.736752 master-0 kubenswrapper[19803]: I0313 01:17:33.736711 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:33.736752 master-0 kubenswrapper[19803]: I0313 01:17:33.736735 19803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:33.736752 master-0 kubenswrapper[19803]: I0313 01:17:33.736747 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/7106c6fe-7c8d-45b9-bc5c-521db743663f-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:33.736752 master-0 kubenswrapper[19803]: I0313 01:17:33.736757 19803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:33.737003 master-0 kubenswrapper[19803]: I0313 01:17:33.736767 19803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:33.737003 master-0 kubenswrapper[19803]: I0313 01:17:33.736778 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/fdcd8438-d33f-490f-a841-8944c58506f8-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:17:33.742095 master-0 kubenswrapper[19803]: I0313 01:17:33.742060 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 13 01:17:33.761407 master-0 kubenswrapper[19803]: I0313 01:17:33.761352 19803 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 13 01:17:33.791952 master-0 kubenswrapper[19803]: I0313 01:17:33.791892 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 13 01:17:33.799971 master-0 kubenswrapper[19803]: I0313 01:17:33.799912 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-ca-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:17:33.802032 master-0 kubenswrapper[19803]: I0313 01:17:33.801999 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Mar 13 01:17:33.803250 master-0 kubenswrapper[19803]: I0313 01:17:33.803213 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-default-certificate\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:33.822916 master-0 kubenswrapper[19803]: I0313 01:17:33.822856 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 01:17:33.829772 master-0 kubenswrapper[19803]: I0313 01:17:33.829711 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-serving-ca\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:33.843263 master-0 kubenswrapper[19803]: I0313 
01:17:33.843206 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert" Mar 13 01:17:33.854345 master-0 kubenswrapper[19803]: I0313 01:17:33.854292 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalogserver-certs\" (UniqueName: \"kubernetes.io/secret/81835d51-a414-440f-889b-690561e98d6a-catalogserver-certs\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:17:33.861182 master-0 kubenswrapper[19803]: I0313 01:17:33.861142 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Mar 13 01:17:33.863740 master-0 kubenswrapper[19803]: I0313 01:17:33.863678 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-stats-auth\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:33.881302 master-0 kubenswrapper[19803]: I0313 01:17:33.881248 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Mar 13 01:17:33.888135 master-0 kubenswrapper[19803]: I0313 01:17:33.888078 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-serving-cert\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:33.900576 master-0 kubenswrapper[19803]: I0313 01:17:33.900484 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Mar 13 01:17:33.920733 master-0 kubenswrapper[19803]: I0313 
01:17:33.920684 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Mar 13 01:17:33.942463 master-0 kubenswrapper[19803]: I0313 01:17:33.942348 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 01:17:33.950413 master-0 kubenswrapper[19803]: I0313 01:17:33.950367 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-encryption-config\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:33.960751 master-0 kubenswrapper[19803]: I0313 01:17:33.960716 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Mar 13 01:17:33.963286 master-0 kubenswrapper[19803]: I0313 01:17:33.963238 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0caabde8-d49a-431d-afe5-8b283188c11c-metrics-certs\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:33.980859 master-0 kubenswrapper[19803]: I0313 01:17:33.980810 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Mar 13 01:17:34.001274 master-0 kubenswrapper[19803]: I0313 01:17:34.001219 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Mar 13 01:17:34.021289 master-0 kubenswrapper[19803]: I0313 01:17:34.021230 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Mar 13 01:17:34.029760 master-0 kubenswrapper[19803]: I0313 01:17:34.029710 19803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/536a2de1-e13c-47d1-b61d-88e0a5fd2851-etcd-client\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:34.042559 master-0 kubenswrapper[19803]: I0313 01:17:34.042491 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Mar 13 01:17:34.047898 master-0 kubenswrapper[19803]: I0313 01:17:34.047845 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-trusted-ca-bundle\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:34.062231 master-0 kubenswrapper[19803]: I0313 01:17:34.062019 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Mar 13 01:17:34.069530 master-0 kubenswrapper[19803]: I0313 01:17:34.069461 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/536a2de1-e13c-47d1-b61d-88e0a5fd2851-audit-policies\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:34.082188 master-0 kubenswrapper[19803]: I0313 01:17:34.082097 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt" Mar 13 01:17:34.101351 master-0 kubenswrapper[19803]: I0313 01:17:34.101273 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 13 01:17:34.130208 master-0 kubenswrapper[19803]: I0313 01:17:34.130098 19803 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle" Mar 13 01:17:34.133331 master-0 kubenswrapper[19803]: I0313 01:17:34.133254 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-ca-certs\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:17:34.154284 master-0 kubenswrapper[19803]: I0313 01:17:34.154222 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:34.158301 master-0 kubenswrapper[19803]: I0313 01:17:34.158267 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2jgj\" (UniqueName: \"kubernetes.io/projected/d163333f-fda5-4067-ad7c-6f646ae411c8-kube-api-access-v2jgj\") pod \"csi-snapshot-controller-operator-5685fbc7d-478l8\" (UID: \"d163333f-fda5-4067-ad7c-6f646ae411c8\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-operator-5685fbc7d-478l8" Mar 13 01:17:34.179123 master-0 kubenswrapper[19803]: I0313 01:17:34.179055 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8l9r\" (UniqueName: \"kubernetes.io/projected/6fd82994-f4d4-49e9-8742-07e206322e76-kube-api-access-k8l9r\") pod \"openshift-config-operator-64488f9d78-trr9r\" (UID: \"6fd82994-f4d4-49e9-8742-07e206322e76\") " pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:17:34.194037 master-0 kubenswrapper[19803]: I0313 01:17:34.193949 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztdc9\" (UniqueName: 
\"kubernetes.io/projected/b5757329-8692-4719-b3c7-b5df78110fcf-kube-api-access-ztdc9\") pod \"authentication-operator-7c6989d6c4-plhx7\" (UID: \"b5757329-8692-4719-b3c7-b5df78110fcf\") " pod="openshift-authentication-operator/authentication-operator-7c6989d6c4-plhx7" Mar 13 01:17:34.214421 master-0 kubenswrapper[19803]: I0313 01:17:34.213609 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wds6q\" (UniqueName: \"kubernetes.io/projected/95c7493b-ad9d-490e-83f3-aa28750b2b5e-kube-api-access-wds6q\") pod \"dns-default-pfsjd\" (UID: \"95c7493b-ad9d-490e-83f3-aa28750b2b5e\") " pod="openshift-dns/dns-default-pfsjd" Mar 13 01:17:34.235203 master-0 kubenswrapper[19803]: I0313 01:17:34.235090 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qmr7z\" (UniqueName: \"kubernetes.io/projected/f771149b-9d62-408e-be6f-72f575b1ec42-kube-api-access-qmr7z\") pod \"migrator-57ccdf9b5-5zsh9\" (UID: \"f771149b-9d62-408e-be6f-72f575b1ec42\") " pod="openshift-kube-storage-version-migrator/migrator-57ccdf9b5-5zsh9" Mar 13 01:17:34.266960 master-0 kubenswrapper[19803]: I0313 01:17:34.266897 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lz8ww\" (UniqueName: \"kubernetes.io/projected/be2913a0-453b-4b24-ab2c-b8ef2ad3ac16-kube-api-access-lz8ww\") pod \"apiserver-7dbfb86fbb-mc7xz\" (UID: \"be2913a0-453b-4b24-ab2c-b8ef2ad3ac16\") " pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz" Mar 13 01:17:34.279448 master-0 kubenswrapper[19803]: I0313 01:17:34.279411 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98t5h\" (UniqueName: \"kubernetes.io/projected/53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59-kube-api-access-98t5h\") pod \"package-server-manager-854648ff6d-pj26h\" (UID: \"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59\") " pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:17:34.294141 master-0 
kubenswrapper[19803]: I0313 01:17:34.294086 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhk76\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-kube-api-access-fhk76\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl"
Mar 13 01:17:34.299612 master-0 kubenswrapper[19803]: I0313 01:17:34.299556 19803 request.go:700] Waited for 1.000588811s due to client-side throttling, not priority and fairness, request: POST:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-network-operator/serviceaccounts/cluster-network-operator/token
Mar 13 01:17:34.316915 master-0 kubenswrapper[19803]: I0313 01:17:34.316865 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5xmqc\" (UniqueName: \"kubernetes.io/projected/dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc-kube-api-access-5xmqc\") pod \"network-operator-7c649bf6d4-4zrk7\" (UID: \"dc85ce91-b9de-4e9f-a1f7-12ce9887b1dc\") " pod="openshift-network-operator/network-operator-7c649bf6d4-4zrk7"
Mar 13 01:17:34.324157 master-0 kubenswrapper[19803]: I0313 01:17:34.324101 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f77c8e18b751d90bc0dfe2d4e304050" path="/var/lib/kubelet/pods/5f77c8e18b751d90bc0dfe2d4e304050/volumes"
Mar 13 01:17:34.324632 master-0 kubenswrapper[19803]: I0313 01:17:34.324606 19803 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Mar 13 01:17:34.333476 master-0 kubenswrapper[19803]: I0313 01:17:34.333427 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlgsr\" (UniqueName: \"kubernetes.io/projected/34889110-f282-4c2c-a2b0-620033559e1b-kube-api-access-tlgsr\") pod \"network-check-target-49pfj\" (UID: \"34889110-f282-4c2c-a2b0-620033559e1b\") " pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:17:34.360071 master-0 kubenswrapper[19803]: I0313 01:17:34.359998 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-845hm\" (UniqueName: \"kubernetes.io/projected/b74de987-7962-425e-9447-24b285eb888f-kube-api-access-845hm\") pod \"tuned-p9mnd\" (UID: \"b74de987-7962-425e-9447-24b285eb888f\") " pod="openshift-cluster-node-tuning-operator/tuned-p9mnd"
Mar 13 01:17:34.373071 master-0 kubenswrapper[19803]: I0313 01:17:34.373019 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n58nf\" (UniqueName: \"kubernetes.io/projected/49a28ab7-1176-4213-b037-19fe18bbe57b-kube-api-access-n58nf\") pod \"ovnkube-node-nlhbx\" (UID: \"49a28ab7-1176-4213-b037-19fe18bbe57b\") " pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:34.398111 master-0 kubenswrapper[19803]: I0313 01:17:34.398048 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4b8jr\" (UniqueName: \"kubernetes.io/projected/7d874a21-43aa-4d81-b904-853fb3da5a94-kube-api-access-4b8jr\") pod \"dns-operator-589895fbb7-wb6qq\" (UID: \"7d874a21-43aa-4d81-b904-853fb3da5a94\") " pod="openshift-dns-operator/dns-operator-589895fbb7-wb6qq"
Mar 13 01:17:34.407357 master-0 kubenswrapper[19803]: E0313 01:17:34.407312 19803 secret.go:189] Couldn't get secret openshift-cluster-samples-operator/samples-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.407475 master-0 kubenswrapper[19803]: E0313 01:17:34.407414 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-samples-operator-tls podName:778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.907391471 +0000 UTC m=+2.872539150 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "samples-operator-tls" (UniqueName: "kubernetes.io/secret/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-samples-operator-tls") pod "cluster-samples-operator-664cb58b85-mcfmg" (UID: "778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.407784 master-0 kubenswrapper[19803]: E0313 01:17:34.407758 19803 configmap.go:193] Couldn't get configMap openshift-machine-api/baremetal-kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.407861 master-0 kubenswrapper[19803]: E0313 01:17:34.407812 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-config podName:21110b48-25fc-434a-b156-7f6bd6064bed nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.907802191 +0000 UTC m=+2.872949870 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-config") pod "cluster-baremetal-operator-5cdb4c5598-5dvnt" (UID: "21110b48-25fc-434a-b156-7f6bd6064bed") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.409063 master-0 kubenswrapper[19803]: E0313 01:17:34.409040 19803 secret.go:189] Couldn't get secret openshift-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409137 master-0 kubenswrapper[19803]: E0313 01:17:34.409095 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert podName:d477d4b0-8b36-4ff9-9b56-0e67709b1aa7 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.909085992 +0000 UTC m=+2.874233661 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert") pod "controller-manager-7f46d696f9-s9d6s" (UID: "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409137 master-0 kubenswrapper[19803]: E0313 01:17:34.409128 19803 configmap.go:193] Couldn't get configMap openshift-cluster-version/openshift-service-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.409207 master-0 kubenswrapper[19803]: E0313 01:17:34.409154 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-service-ca podName:b3bf9dde-ca5b-46b8-883c-51e88ddf52e1 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.909147763 +0000 UTC m=+2.874295442 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca" (UniqueName: "kubernetes.io/configmap/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-service-ca") pod "cluster-version-operator-8c9c967c7-jzj9v" (UID: "b3bf9dde-ca5b-46b8-883c-51e88ddf52e1") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.409207 master-0 kubenswrapper[19803]: E0313 01:17:34.409144 19803 secret.go:189] Couldn't get secret openshift-insights/openshift-insights-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409207 master-0 kubenswrapper[19803]: E0313 01:17:34.409174 19803 secret.go:189] Couldn't get secret openshift-cluster-machine-approver/machine-approver-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409207 master-0 kubenswrapper[19803]: E0313 01:17:34.409206 19803 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy-cluster-autoscaler-operator: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.409321 master-0 kubenswrapper[19803]: E0313 01:17:34.409244 19803 secret.go:189] Couldn't get secret openshift-route-controller-manager/serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409321 master-0 kubenswrapper[19803]: E0313 01:17:34.409189 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6e799871-735a-44e8-8193-24c5bb388928-serving-cert podName:6e799871-735a-44e8-8193-24c5bb388928 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.909183144 +0000 UTC m=+2.874330823 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/6e799871-735a-44e8-8193-24c5bb388928-serving-cert") pod "insights-operator-8f89dfddd-hn4jh" (UID: "6e799871-735a-44e8-8193-24c5bb388928") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409321 master-0 kubenswrapper[19803]: E0313 01:17:34.409291 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2581e5b5-8cbb-4fa5-9888-98fb572a6232-auth-proxy-config podName:2581e5b5-8cbb-4fa5-9888-98fb572a6232 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.909283907 +0000 UTC m=+2.874431586 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/2581e5b5-8cbb-4fa5-9888-98fb572a6232-auth-proxy-config") pod "cluster-autoscaler-operator-69576476f7-lrmx9" (UID: "2581e5b5-8cbb-4fa5-9888-98fb572a6232") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.409321 master-0 kubenswrapper[19803]: E0313 01:17:34.409302 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert podName:581ff17d-f121-4ece-8e45-81f1f710d163 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.909297827 +0000 UTC m=+2.874445506 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert") pod "route-controller-manager-6cc78fd984-g55t4" (UID: "581ff17d-f121-4ece-8e45-81f1f710d163") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409445 master-0 kubenswrapper[19803]: E0313 01:17:34.409322 19803 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409445 master-0 kubenswrapper[19803]: E0313 01:17:34.409329 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e938267-de1f-46f7-bf78-b0b3e810c4fa-machine-approver-tls podName:7e938267-de1f-46f7-bf78-b0b3e810c4fa nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.909310157 +0000 UTC m=+2.874457836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-approver-tls" (UniqueName: "kubernetes.io/secret/7e938267-de1f-46f7-bf78-b0b3e810c4fa-machine-approver-tls") pod "machine-approver-754bdc9f9d-cp77c" (UID: "7e938267-de1f-46f7-bf78-b0b3e810c4fa") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409445 master-0 kubenswrapper[19803]: E0313 01:17:34.409362 19803 configmap.go:193] Couldn't get configMap openshift-machine-api/cluster-baremetal-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.409445 master-0 kubenswrapper[19803]: E0313 01:17:34.409381 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert podName:ca06fac5-6707-4521-88ce-1768fede42c2 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.909354258 +0000 UTC m=+2.874502167 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert") pod "packageserver-7877bc66f6-sf5t2" (UID: "ca06fac5-6707-4521-88ce-1768fede42c2") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409445 master-0 kubenswrapper[19803]: E0313 01:17:34.409399 19803 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.409445 master-0 kubenswrapper[19803]: E0313 01:17:34.409418 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-images podName:21110b48-25fc-434a-b156-7f6bd6064bed nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.909405629 +0000 UTC m=+2.874553548 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-images") pod "cluster-baremetal-operator-5cdb4c5598-5dvnt" (UID: "21110b48-25fc-434a-b156-7f6bd6064bed") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.409445 master-0 kubenswrapper[19803]: E0313 01:17:34.409446 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls podName:80eb89dc-ccfc-4360-811a-82a3ef6f7b65 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.90943384 +0000 UTC m=+2.874581769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" (UID: "80eb89dc-ccfc-4360-811a-82a3ef6f7b65") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.410598 master-0 kubenswrapper[19803]: E0313 01:17:34.410567 19803 secret.go:189] Couldn't get secret openshift-machine-api/machine-api-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.410598 master-0 kubenswrapper[19803]: E0313 01:17:34.410587 19803 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.410679 master-0 kubenswrapper[19803]: E0313 01:17:34.410640 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-machine-api-operator-tls podName:2760a216-fd4b-46d9-a4ec-2d3285ec02bd nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.910625819 +0000 UTC m=+2.875773498 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "machine-api-operator-tls" (UniqueName: "kubernetes.io/secret/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-machine-api-operator-tls") pod "machine-api-operator-84bf6db4f9-rpjkb" (UID: "2760a216-fd4b-46d9-a4ec-2d3285ec02bd") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.410679 master-0 kubenswrapper[19803]: E0313 01:17:34.410654 19803 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.410679 master-0 kubenswrapper[19803]: E0313 01:17:34.410666 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls podName:dbcb4b80-425a-4dd5-93a8-bb462f641ef1 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.91065509 +0000 UTC m=+2.875802769 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls") pod "machine-config-operator-fdb5c78b5-fr2dk" (UID: "dbcb4b80-425a-4dd5-93a8-bb462f641ef1") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.410766 master-0 kubenswrapper[19803]: E0313 01:17:34.410694 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert podName:ca06fac5-6707-4521-88ce-1768fede42c2 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.91068149 +0000 UTC m=+2.875829159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert") pod "packageserver-7877bc66f6-sf5t2" (UID: "ca06fac5-6707-4521-88ce-1768fede42c2") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.411873 master-0 kubenswrapper[19803]: E0313 01:17:34.411849 19803 configmap.go:193] Couldn't get configMap openshift-insights/trusted-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.411952 master-0 kubenswrapper[19803]: E0313 01:17:34.411880 19803 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.411952 master-0 kubenswrapper[19803]: E0313 01:17:34.411894 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-trusted-ca-bundle podName:6e799871-735a-44e8-8193-24c5bb388928 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.91188559 +0000 UTC m=+2.877033269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-trusted-ca-bundle") pod "insights-operator-8f89dfddd-hn4jh" (UID: "6e799871-735a-44e8-8193-24c5bb388928") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.411952 master-0 kubenswrapper[19803]: E0313 01:17:34.411943 19803 configmap.go:193] Couldn't get configMap openshift-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.412045 master-0 kubenswrapper[19803]: E0313 01:17:34.411969 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images podName:dbcb4b80-425a-4dd5-93a8-bb462f641ef1 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.911950811 +0000 UTC m=+2.877098490 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images") pod "machine-config-operator-fdb5c78b5-fr2dk" (UID: "dbcb4b80-425a-4dd5-93a8-bb462f641ef1") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.412045 master-0 kubenswrapper[19803]: E0313 01:17:34.411988 19803 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.412045 master-0 kubenswrapper[19803]: E0313 01:17:34.411996 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca podName:d477d4b0-8b36-4ff9-9b56-0e67709b1aa7 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.911984022 +0000 UTC m=+2.877131931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca") pod "controller-manager-7f46d696f9-s9d6s" (UID: "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.412045 master-0 kubenswrapper[19803]: E0313 01:17:34.412017 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images podName:80eb89dc-ccfc-4360-811a-82a3ef6f7b65 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.912008532 +0000 UTC m=+2.877156211 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" (UID: "80eb89dc-ccfc-4360-811a-82a3ef6f7b65") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.412045 master-0 kubenswrapper[19803]: E0313 01:17:34.412026 19803 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.412045 master-0 kubenswrapper[19803]: E0313 01:17:34.412043 19803 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.412239 master-0 kubenswrapper[19803]: E0313 01:17:34.412073 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cluster-baremetal-operator-tls podName:21110b48-25fc-434a-b156-7f6bd6064bed nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.912063184 +0000 UTC m=+2.877211103 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-baremetal-operator-tls" (UniqueName: "kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cluster-baremetal-operator-tls") pod "cluster-baremetal-operator-5cdb4c5598-5dvnt" (UID: "21110b48-25fc-434a-b156-7f6bd6064bed") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.412239 master-0 kubenswrapper[19803]: E0313 01:17:34.412076 19803 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.412239 master-0 kubenswrapper[19803]: E0313 01:17:34.412099 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config podName:581ff17d-f121-4ece-8e45-81f1f710d163 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.912088654 +0000 UTC m=+2.877236563 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config") pod "route-controller-manager-6cc78fd984-g55t4" (UID: "581ff17d-f121-4ece-8e45-81f1f710d163") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.412239 master-0 kubenswrapper[19803]: E0313 01:17:34.412117 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls podName:3418d0fb-d0ae-4634-a645-dc387a19147f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.912109305 +0000 UTC m=+2.877256984 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls") pod "machine-config-daemon-fprhw" (UID: "3418d0fb-d0ae-4634-a645-dc387a19147f") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.412239 master-0 kubenswrapper[19803]: E0313 01:17:34.412126 19803 secret.go:189] Couldn't get secret openshift-cluster-version/cluster-version-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.412239 master-0 kubenswrapper[19803]: E0313 01:17:34.412168 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-serving-cert podName:b3bf9dde-ca5b-46b8-883c-51e88ddf52e1 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.912159286 +0000 UTC m=+2.877307195 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-serving-cert") pod "cluster-version-operator-8c9c967c7-jzj9v" (UID: "b3bf9dde-ca5b-46b8-883c-51e88ddf52e1") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.412239 master-0 kubenswrapper[19803]: E0313 01:17:34.412208 19803 configmap.go:193] Couldn't get configMap openshift-cloud-credential-operator/cco-trusted-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.412239 master-0 kubenswrapper[19803]: E0313 01:17:34.412249 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cco-trusted-ca podName:65dd1dc7-1b90-40f6-82c9-dee90a1fa852 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.912233948 +0000 UTC m=+2.877381867 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cco-trusted-ca" (UniqueName: "kubernetes.io/configmap/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cco-trusted-ca") pod "cloud-credential-operator-55d85b7b47-b4w7s" (UID: "65dd1dc7-1b90-40f6-82c9-dee90a1fa852") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.412606 master-0 kubenswrapper[19803]: E0313 01:17:34.412588 19803 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/machine-approver-config: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.412652 master-0 kubenswrapper[19803]: E0313 01:17:34.412626 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-config podName:7e938267-de1f-46f7-bf78-b0b3e810c4fa nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.912617477 +0000 UTC m=+2.877765156 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-config") pod "machine-approver-754bdc9f9d-cp77c" (UID: "7e938267-de1f-46f7-bf78-b0b3e810c4fa") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.413249 master-0 kubenswrapper[19803]: I0313 01:17:34.413209 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2dlx5\" (UniqueName: \"kubernetes.io/projected/f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd-kube-api-access-2dlx5\") pod \"multus-additional-cni-plugins-mjh5s\" (UID: \"f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd\") " pod="openshift-multus/multus-additional-cni-plugins-mjh5s"
Mar 13 01:17:34.413400 master-0 kubenswrapper[19803]: E0313 01:17:34.413370 19803 secret.go:189] Couldn't get secret openshift-machine-api/control-plane-machine-set-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.413451 master-0 kubenswrapper[19803]: E0313 01:17:34.413426 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/56e20b21-ba17-46ae-a740-0e7bd45eae5f-control-plane-machine-set-operator-tls podName:56e20b21-ba17-46ae-a740-0e7bd45eae5f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.913414756 +0000 UTC m=+2.878562615 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "control-plane-machine-set-operator-tls" (UniqueName: "kubernetes.io/secret/56e20b21-ba17-46ae-a740-0e7bd45eae5f-control-plane-machine-set-operator-tls") pod "control-plane-machine-set-operator-6686554ddc-pmrq6" (UID: "56e20b21-ba17-46ae-a740-0e7bd45eae5f") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.413484 master-0 kubenswrapper[19803]: E0313 01:17:34.413423 19803 configmap.go:193] Couldn't get configMap openshift-machine-api/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.413484 master-0 kubenswrapper[19803]: E0313 01:17:34.413457 19803 configmap.go:193] Couldn't get configMap openshift-controller-manager/openshift-global-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.413552 master-0 kubenswrapper[19803]: E0313 01:17:34.413426 19803 secret.go:189] Couldn't get secret openshift-machine-api/cluster-autoscaler-operator-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.413588 master-0 kubenswrapper[19803]: E0313 01:17:34.413564 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-config podName:2760a216-fd4b-46d9-a4ec-2d3285ec02bd nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.913537959 +0000 UTC m=+2.878685648 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-config") pod "machine-api-operator-84bf6db4f9-rpjkb" (UID: "2760a216-fd4b-46d9-a4ec-2d3285ec02bd") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.413629 master-0 kubenswrapper[19803]: E0313 01:17:34.413589 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/2581e5b5-8cbb-4fa5-9888-98fb572a6232-cert podName:2581e5b5-8cbb-4fa5-9888-98fb572a6232 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.9135744 +0000 UTC m=+2.878722189 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/2581e5b5-8cbb-4fa5-9888-98fb572a6232-cert") pod "cluster-autoscaler-operator-69576476f7-lrmx9" (UID: "2581e5b5-8cbb-4fa5-9888-98fb572a6232") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.413629 master-0 kubenswrapper[19803]: E0313 01:17:34.413614 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles podName:d477d4b0-8b36-4ff9-9b56-0e67709b1aa7 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.913604861 +0000 UTC m=+2.878752800 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-ca-bundles" (UniqueName: "kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles") pod "controller-manager-7f46d696f9-s9d6s" (UID: "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.414836 master-0 kubenswrapper[19803]: E0313 01:17:34.414807 19803 secret.go:189] Couldn't get secret openshift-cluster-storage-operator/cluster-storage-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.414898 master-0 kubenswrapper[19803]: E0313 01:17:34.414857 19803 secret.go:189] Couldn't get secret openshift-machine-api/cluster-baremetal-webhook-server-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.414898 master-0 kubenswrapper[19803]: E0313 01:17:34.414874 19803 configmap.go:193] Couldn't get configMap openshift-insights/service-ca-bundle: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.414972 master-0 kubenswrapper[19803]: E0313 01:17:34.414856 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-cluster-storage-operator-serving-cert podName:65ef9aae-25a5-46c6-adf3-634f8f7a29bc nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.914846101 +0000 UTC m=+2.879993780 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cluster-storage-operator-serving-cert" (UniqueName: "kubernetes.io/secret/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-cluster-storage-operator-serving-cert") pod "cluster-storage-operator-6fbfc8dc8f-h9mwm" (UID: "65ef9aae-25a5-46c6-adf3-634f8f7a29bc") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.414972 master-0 kubenswrapper[19803]: E0313 01:17:34.414958 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cert podName:21110b48-25fc-434a-b156-7f6bd6064bed nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.914935903 +0000 UTC m=+2.880083602 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cert") pod "cluster-baremetal-operator-5cdb4c5598-5dvnt" (UID: "21110b48-25fc-434a-b156-7f6bd6064bed") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.415031 master-0 kubenswrapper[19803]: E0313 01:17:34.414984 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-service-ca-bundle podName:6e799871-735a-44e8-8193-24c5bb388928 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.914973944 +0000 UTC m=+2.880121633 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "service-ca-bundle" (UniqueName: "kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-service-ca-bundle") pod "insights-operator-8f89dfddd-hn4jh" (UID: "6e799871-735a-44e8-8193-24c5bb388928") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.417214 master-0 kubenswrapper[19803]: E0313 01:17:34.417179 19803 configmap.go:193] Couldn't get configMap openshift-route-controller-manager/client-ca: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.417277 master-0 kubenswrapper[19803]: E0313 01:17:34.417231 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca podName:581ff17d-f121-4ece-8e45-81f1f710d163 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.917211338 +0000 UTC m=+2.882359017 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "client-ca" (UniqueName: "kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca") pod "route-controller-manager-6cc78fd984-g55t4" (UID: "581ff17d-f121-4ece-8e45-81f1f710d163") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.417277 master-0 kubenswrapper[19803]: E0313 01:17:34.417248 19803 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.417277 master-0 kubenswrapper[19803]: E0313 01:17:34.417260 19803 configmap.go:193] Couldn't get configMap openshift-cluster-machine-approver/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.417363 master-0 kubenswrapper[19803]: E0313 01:17:34.417288 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-auth-proxy-config podName:7e938267-de1f-46f7-bf78-b0b3e810c4fa nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.91728222 +0000 UTC m=+2.882429899 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-auth-proxy-config") pod "machine-approver-754bdc9f9d-cp77c" (UID: "7e938267-de1f-46f7-bf78-b0b3e810c4fa") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.417363 master-0 kubenswrapper[19803]: E0313 01:17:34.417311 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config podName:80eb89dc-ccfc-4360-811a-82a3ef6f7b65 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.91729527 +0000 UTC m=+2.882443159 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" (UID: "80eb89dc-ccfc-4360-811a-82a3ef6f7b65") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.417969 master-0 kubenswrapper[19803]: E0313 01:17:34.417926 19803 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.418052 master-0 kubenswrapper[19803]: E0313 01:17:34.418023 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config podName:c55a215a-9a95-4f48-8668-9b76503c3044 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.918007627 +0000 UTC m=+2.883155486 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config") pod "machine-config-controller-ff46b7bdf-nnjxp" (UID: "c55a215a-9a95-4f48-8668-9b76503c3044") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.418800 master-0 kubenswrapper[19803]: E0313 01:17:34.418774 19803 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.418857 master-0 kubenswrapper[19803]: E0313 01:17:34.418845 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config podName:3418d0fb-d0ae-4634-a645-dc387a19147f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.918827087 +0000 UTC m=+2.883974776 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config") pod "machine-config-daemon-fprhw" (UID: "3418d0fb-d0ae-4634-a645-dc387a19147f") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.423088 master-0 kubenswrapper[19803]: E0313 01:17:34.423055 19803 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-admission-webhook-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.423216 master-0 kubenswrapper[19803]: E0313 01:17:34.423087 19803 configmap.go:193] Couldn't get configMap openshift-controller-manager/config: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.423216 master-0 kubenswrapper[19803]: E0313 01:17:34.423124 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/0ff72b58-aca9-46f1-86ca-da8339734ac9-tls-certificates podName:0ff72b58-aca9-46f1-86ca-da8339734ac9 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.923110551 +0000 UTC m=+2.888258240 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-certificates" (UniqueName: "kubernetes.io/secret/0ff72b58-aca9-46f1-86ca-da8339734ac9-tls-certificates") pod "prometheus-operator-admission-webhook-8464df8497-rhk4l" (UID: "0ff72b58-aca9-46f1-86ca-da8339734ac9") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.423216 master-0 kubenswrapper[19803]: E0313 01:17:34.423153 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config podName:d477d4b0-8b36-4ff9-9b56-0e67709b1aa7 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.923140982 +0000 UTC m=+2.888288931 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config") pod "controller-manager-7f46d696f9-s9d6s" (UID: "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:34.424267 master-0 kubenswrapper[19803]: E0313 01:17:34.424248 19803 secret.go:189] Couldn't get secret openshift-cloud-credential-operator/cloud-credential-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:34.424343 master-0 kubenswrapper[19803]: E0313 01:17:34.424303 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cloud-credential-operator-serving-cert podName:65dd1dc7-1b90-40f6-82c9-dee90a1fa852 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.92428925 +0000 UTC m=+2.889436929 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "cloud-credential-operator-serving-cert" (UniqueName: "kubernetes.io/secret/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cloud-credential-operator-serving-cert") pod "cloud-credential-operator-55d85b7b47-b4w7s" (UID: "65dd1dc7-1b90-40f6-82c9-dee90a1fa852") : failed to sync secret cache: timed out waiting for the condition Mar 13 01:17:34.424410 master-0 kubenswrapper[19803]: E0313 01:17:34.424381 19803 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 01:17:34.424460 master-0 kubenswrapper[19803]: E0313 01:17:34.424440 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls podName:c55a215a-9a95-4f48-8668-9b76503c3044 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.924427413 +0000 UTC m=+2.889575102 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls") pod "machine-config-controller-ff46b7bdf-nnjxp" (UID: "c55a215a-9a95-4f48-8668-9b76503c3044") : failed to sync secret cache: timed out waiting for the condition Mar 13 01:17:34.425068 master-0 kubenswrapper[19803]: E0313 01:17:34.425048 19803 configmap.go:193] Couldn't get configMap openshift-machine-api/machine-api-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 13 01:17:34.425116 master-0 kubenswrapper[19803]: E0313 01:17:34.425066 19803 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition Mar 13 01:17:34.425116 master-0 kubenswrapper[19803]: E0313 01:17:34.425099 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-images 
podName:2760a216-fd4b-46d9-a4ec-2d3285ec02bd nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.925089368 +0000 UTC m=+2.890237047 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-images") pod "machine-api-operator-84bf6db4f9-rpjkb" (UID: "2760a216-fd4b-46d9-a4ec-2d3285ec02bd") : failed to sync configmap cache: timed out waiting for the condition Mar 13 01:17:34.425177 master-0 kubenswrapper[19803]: E0313 01:17:34.425122 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config podName:dbcb4b80-425a-4dd5-93a8-bb462f641ef1 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:34.925108689 +0000 UTC m=+2.890256378 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config") pod "machine-config-operator-fdb5c78b5-fr2dk" (UID: "dbcb4b80-425a-4dd5-93a8-bb462f641ef1") : failed to sync configmap cache: timed out waiting for the condition Mar 13 01:17:34.434780 master-0 kubenswrapper[19803]: I0313 01:17:34.434738 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lmnh2\" (UniqueName: \"kubernetes.io/projected/d89b5d71-5522-433e-a0bb-f2767332e744-kube-api-access-lmnh2\") pod \"service-ca-84bfdbbb7f-n9vpf\" (UID: \"d89b5d71-5522-433e-a0bb-f2767332e744\") " pod="openshift-service-ca/service-ca-84bfdbbb7f-n9vpf" Mar 13 01:17:34.453178 master-0 kubenswrapper[19803]: I0313 01:17:34.453067 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqfj5\" (UniqueName: \"kubernetes.io/projected/23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b-kube-api-access-pqfj5\") pod \"openshift-apiserver-operator-799b6db4d7-6bvjn\" (UID: \"23fbcbe2-60e1-46ef-9eb1-1c996ba5fa5b\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-799b6db4d7-6bvjn" Mar 13 01:17:34.476104 master-0 kubenswrapper[19803]: I0313 01:17:34.476054 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4bzs5\" (UniqueName: \"kubernetes.io/projected/31f19d97-50f9-4486-a8f9-df61ef2b0528-kube-api-access-4bzs5\") pod \"olm-operator-d64cfc9db-r4gzg\" (UID: \"31f19d97-50f9-4486-a8f9-df61ef2b0528\") " pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg" Mar 13 01:17:34.501312 master-0 kubenswrapper[19803]: I0313 01:17:34.501252 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj7cp\" (UniqueName: \"kubernetes.io/projected/9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d-kube-api-access-pj7cp\") pod \"network-metrics-daemon-9hwz9\" (UID: \"9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d\") " pod="openshift-multus/network-metrics-daemon-9hwz9" Mar 13 01:17:34.523352 master-0 kubenswrapper[19803]: I0313 01:17:34.523300 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fde89b0b-7133-4b97-9e35-51c0382bd366-kube-api-access\") pod \"kube-apiserver-operator-68bd585b-g8gj5\" (UID: \"fde89b0b-7133-4b97-9e35-51c0382bd366\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-68bd585b-g8gj5" Mar 13 01:17:34.536305 master-0 kubenswrapper[19803]: I0313 01:17:34.536246 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzxv5\" (UniqueName: \"kubernetes.io/projected/69da0e58-2ae6-4d4b-b125-77e93df3d660-kube-api-access-pzxv5\") pod \"iptables-alerter-mkkgg\" (UID: \"69da0e58-2ae6-4d4b-b125-77e93df3d660\") " pod="openshift-network-operator/iptables-alerter-mkkgg" Mar 13 01:17:34.559836 master-0 kubenswrapper[19803]: I0313 01:17:34.559775 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vccjz\" (UniqueName: 
\"kubernetes.io/projected/0caabde8-d49a-431d-afe5-8b283188c11c-kube-api-access-vccjz\") pod \"router-default-79f8cd6fdd-kzq6q\" (UID: \"0caabde8-d49a-431d-afe5-8b283188c11c\") " pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q" Mar 13 01:17:34.577836 master-0 kubenswrapper[19803]: I0313 01:17:34.577766 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkzq\" (UniqueName: \"kubernetes.io/projected/74efa52b-fd97-418a-9a44-914442633f74-kube-api-access-8jkzq\") pod \"openshift-controller-manager-operator-8565d84698-7rhdg\" (UID: \"74efa52b-fd97-418a-9a44-914442633f74\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-8565d84698-7rhdg" Mar 13 01:17:34.581129 master-0 kubenswrapper[19803]: I0313 01:17:34.581094 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls" Mar 13 01:17:34.619843 master-0 kubenswrapper[19803]: I0313 01:17:34.619763 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:17:34.619843 master-0 kubenswrapper[19803]: I0313 01:17:34.619787 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:17:34.620092 master-0 kubenswrapper[19803]: I0313 01:17:34.619860 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:17:34.623030 master-0 kubenswrapper[19803]: I0313 01:17:34.622189 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-txxbg\" (UniqueName: \"kubernetes.io/projected/c687237e-50e5-405d-8fef-0efbc3866630-kube-api-access-txxbg\") pod \"network-node-identity-mcps9\" (UID: \"c687237e-50e5-405d-8fef-0efbc3866630\") " pod="openshift-network-node-identity/network-node-identity-mcps9" Mar 13 01:17:34.636117 master-0 kubenswrapper[19803]: I0313 01:17:34.636031 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zpdjh\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-kube-api-access-zpdjh\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:17:34.659282 master-0 kubenswrapper[19803]: I0313 01:17:34.659206 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rg4g\" (UniqueName: \"kubernetes.io/projected/96b67a99-eada-44d7-93eb-cc3ced777fc6-kube-api-access-4rg4g\") pod \"kube-storage-version-migrator-operator-7f65c457f5-m4v8h\" (UID: \"96b67a99-eada-44d7-93eb-cc3ced777fc6\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-7f65c457f5-m4v8h" Mar 13 01:17:34.673102 master-0 kubenswrapper[19803]: I0313 01:17:34.673030 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/91fc568a-61ad-400e-a54e-21d62e51bb17-bound-sa-token\") pod \"cluster-image-registry-operator-86d6d77c7c-6vvzl\" (UID: \"91fc568a-61ad-400e-a54e-21d62e51bb17\") " pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" Mar 13 01:17:34.693858 master-0 kubenswrapper[19803]: I0313 01:17:34.693796 19803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sjkgv\" (UniqueName: \"kubernetes.io/projected/de46c12a-aa3e-442e-bcc4-365d05f50103-kube-api-access-sjkgv\") pod \"multus-xk75p\" (UID: \"de46c12a-aa3e-442e-bcc4-365d05f50103\") " pod="openshift-multus/multus-xk75p" Mar 13 01:17:34.718765 master-0 kubenswrapper[19803]: I0313 01:17:34.718639 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rrvhw\" (UniqueName: \"kubernetes.io/projected/8ad2a6d5-6edf-4840-89f9-47847c8dac05-kube-api-access-rrvhw\") pod \"marketplace-operator-64bf9778cb-bx29h\" (UID: \"8ad2a6d5-6edf-4840-89f9-47847c8dac05\") " pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:17:34.738251 master-0 kubenswrapper[19803]: I0313 01:17:34.738175 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/75a53c09-210a-4346-99b0-a632b9e0a3c9-bound-sa-token\") pod \"ingress-operator-677db989d6-p5c8r\" (UID: \"75a53c09-210a-4346-99b0-a632b9e0a3c9\") " pod="openshift-ingress-operator/ingress-operator-677db989d6-p5c8r" Mar 13 01:17:34.770689 master-0 kubenswrapper[19803]: I0313 01:17:34.770584 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2nbvg\" (UniqueName: \"kubernetes.io/projected/fbfc2caf-126e-41b9-9b31-05f7a45d8536-kube-api-access-2nbvg\") pod \"service-ca-operator-69b6fc6b88-rghrf\" (UID: \"fbfc2caf-126e-41b9-9b31-05f7a45d8536\") " pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" Mar 13 01:17:34.776080 master-0 kubenswrapper[19803]: I0313 01:17:34.776015 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5lg5\" (UniqueName: \"kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5\") pod \"multus-admission-controller-8d675b596-ddtwn\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " 
pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:17:34.798441 master-0 kubenswrapper[19803]: I0313 01:17:34.798340 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6wzz\" (UniqueName: \"kubernetes.io/projected/8c377a67-e763-4925-afae-a7f8546a369b-kube-api-access-t6wzz\") pod \"ovnkube-control-plane-66b55d57d-d6gzp\" (UID: \"8c377a67-e763-4925-afae-a7f8546a369b\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" Mar 13 01:17:34.817238 master-0 kubenswrapper[19803]: I0313 01:17:34.817169 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k5gc8\" (UniqueName: \"kubernetes.io/projected/6ad2904e-ece9-4d72-8683-c3e691e07497-kube-api-access-k5gc8\") pod \"catalog-operator-7d9c49f57b-4jttq\" (UID: \"6ad2904e-ece9-4d72-8683-c3e691e07497\") " pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq" Mar 13 01:17:34.845416 master-0 kubenswrapper[19803]: I0313 01:17:34.845348 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zzqj\" (UniqueName: \"kubernetes.io/projected/0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a-kube-api-access-5zzqj\") pod \"csi-snapshot-controller-7577d6f48-bj5ld\" (UID: \"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a\") " pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" Mar 13 01:17:34.866890 master-0 kubenswrapper[19803]: I0313 01:17:34.866436 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jc8xs\" (UniqueName: \"kubernetes.io/projected/46015913-c499-49b1-a9f6-a61c6e96b13f-kube-api-access-jc8xs\") pod \"cluster-monitoring-operator-674cbfbd9d-75jj7\" (UID: \"46015913-c499-49b1-a9f6-a61c6e96b13f\") " pod="openshift-monitoring/cluster-monitoring-operator-674cbfbd9d-75jj7" Mar 13 01:17:34.878474 master-0 kubenswrapper[19803]: I0313 01:17:34.878386 19803 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-smhrl\" (UniqueName: \"kubernetes.io/projected/250a32b4-cc8d-43fa-9dd1-0a8d85a2739a-kube-api-access-smhrl\") pod \"cluster-olm-operator-77899cf6d-rzdkn\" (UID: \"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a\") " pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" Mar 13 01:17:34.899994 master-0 kubenswrapper[19803]: I0313 01:17:34.899920 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c6db75e5-efd1-4bfa-9941-0934d7621ba2-kube-api-access\") pod \"openshift-kube-scheduler-operator-5c74bfc494-8fkz8\" (UID: \"c6db75e5-efd1-4bfa-9941-0934d7621ba2\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" Mar 13 01:17:34.914548 master-0 kubenswrapper[19803]: I0313 01:17:34.914451 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b4qsk\" (UniqueName: \"kubernetes.io/projected/8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7-kube-api-access-b4qsk\") pod \"cluster-node-tuning-operator-66c7586884-wk89g\" (UID: \"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7\") " pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" Mar 13 01:17:34.939153 master-0 kubenswrapper[19803]: I0313 01:17:34.939084 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hngc8\" (UniqueName: \"kubernetes.io/projected/2ec42095-36f5-48cf-af9d-e7a60f6cb121-kube-api-access-hngc8\") pod \"network-check-source-7c67b67d47-xd626\" (UID: \"2ec42095-36f5-48cf-af9d-e7a60f6cb121\") " pod="openshift-network-diagnostics/network-check-source-7c67b67d47-xd626" Mar 13 01:17:34.958252 master-0 kubenswrapper[19803]: I0313 01:17:34.958165 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f2f0667c-90d6-4a6b-b540-9bd0ab5973ea-kube-api-access\") pod 
\"kube-controller-manager-operator-86d7cdfdfb-5dgb8\" (UID: \"f2f0667c-90d6-4a6b-b540-9bd0ab5973ea\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-86d7cdfdfb-5dgb8" Mar 13 01:17:34.969589 master-0 kubenswrapper[19803]: I0313 01:17:34.969384 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-service-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:17:34.969589 master-0 kubenswrapper[19803]: I0313 01:17:34.969460 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:17:34.969793 master-0 kubenswrapper[19803]: I0313 01:17:34.969634 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:34.970087 master-0 kubenswrapper[19803]: I0313 01:17:34.969945 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" 
Mar 13 01:17:34.970087 master-0 kubenswrapper[19803]: I0313 01:17:34.970042 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:17:34.970200 master-0 kubenswrapper[19803]: I0313 01:17:34.970118 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:34.970416 master-0 kubenswrapper[19803]: I0313 01:17:34.970342 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:34.970416 master-0 kubenswrapper[19803]: I0313 01:17:34.970377 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0ff72b58-aca9-46f1-86ca-da8339734ac9-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-rhk4l\" (UID: \"0ff72b58-aca9-46f1-86ca-da8339734ac9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" Mar 13 01:17:34.970554 master-0 kubenswrapper[19803]: I0313 01:17:34.970427 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:17:34.970554 master-0 kubenswrapper[19803]: I0313 01:17:34.970495 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:17:34.970554 master-0 kubenswrapper[19803]: I0313 01:17:34.970550 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-images\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:34.970660 master-0 kubenswrapper[19803]: I0313 01:17:34.970582 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:34.970660 master-0 kubenswrapper[19803]: I0313 01:17:34.970611 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " 
pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:34.970762 master-0 kubenswrapper[19803]: I0313 01:17:34.970661 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:17:34.970762 master-0 kubenswrapper[19803]: I0313 01:17:34.970704 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-config\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:17:34.970762 master-0 kubenswrapper[19803]: I0313 01:17:34.970742 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:17:34.970762 master-0 kubenswrapper[19803]: I0313 01:17:34.970750 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-certificates\" (UniqueName: \"kubernetes.io/secret/0ff72b58-aca9-46f1-86ca-da8339734ac9-tls-certificates\") pod \"prometheus-operator-admission-webhook-8464df8497-rhk4l\" (UID: \"0ff72b58-aca9-46f1-86ca-da8339734ac9\") " pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l" Mar 13 01:17:34.970903 master-0 kubenswrapper[19803]: I0313 01:17:34.970839 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2581e5b5-8cbb-4fa5-9888-98fb572a6232-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:17:34.970903 master-0 kubenswrapper[19803]: I0313 01:17:34.970863 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-service-ca\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:17:34.970903 master-0 kubenswrapper[19803]: I0313 01:17:34.970884 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7e938267-de1f-46f7-bf78-b0b3e810c4fa-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:34.971402 master-0 kubenswrapper[19803]: I0313 01:17:34.971089 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:17:34.971402 master-0 kubenswrapper[19803]: I0313 01:17:34.971272 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls\") pod 
\"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:34.971402 master-0 kubenswrapper[19803]: I0313 01:17:34.971306 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e799871-735a-44e8-8193-24c5bb388928-serving-cert\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:17:34.971402 master-0 kubenswrapper[19803]: I0313 01:17:34.971333 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-images\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:17:34.971402 master-0 kubenswrapper[19803]: I0313 01:17:34.971359 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"
Mar 13 01:17:34.971854 master-0 kubenswrapper[19803]: I0313 01:17:34.971718 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"
Mar 13 01:17:34.972068 master-0 kubenswrapper[19803]: I0313 01:17:34.971922 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"
Mar 13 01:17:34.972068 master-0 kubenswrapper[19803]: I0313 01:17:34.971970 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb"
Mar 13 01:17:34.972068 master-0 kubenswrapper[19803]: I0313 01:17:34.971996 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:17:34.972068 master-0 kubenswrapper[19803]: I0313 01:17:34.972036 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"
Mar 13 01:17:34.972404 master-0 kubenswrapper[19803]: I0313 01:17:34.972303 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:17:34.972544 master-0 kubenswrapper[19803]: I0313 01:17:34.972337 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw"
Mar 13 01:17:34.972896 master-0 kubenswrapper[19803]: I0313 01:17:34.972694 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:34.972896 master-0 kubenswrapper[19803]: I0313 01:17:34.972735 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v"
Mar 13 01:17:34.972896 master-0 kubenswrapper[19803]: I0313 01:17:34.972761 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:17:34.972896 master-0 kubenswrapper[19803]: I0313 01:17:34.972788 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"
Mar 13 01:17:34.973206 master-0 kubenswrapper[19803]: I0313 01:17:34.973105 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:17:34.973206 master-0 kubenswrapper[19803]: I0313 01:17:34.973145 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c"
Mar 13 01:17:34.973206 master-0 kubenswrapper[19803]: I0313 01:17:34.973170 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-config\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb"
Mar 13 01:17:34.973544 master-0 kubenswrapper[19803]: I0313 01:17:34.973393 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2581e5b5-8cbb-4fa5-9888-98fb572a6232-cert\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:17:34.973666 master-0 kubenswrapper[19803]: I0313 01:17:34.973425 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:17:34.973984 master-0 kubenswrapper[19803]: I0313 01:17:34.973816 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/56e20b21-ba17-46ae-a740-0e7bd45eae5f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: \"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6"
Mar 13 01:17:34.973984 master-0 kubenswrapper[19803]: I0313 01:17:34.973872 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:17:34.976927 master-0 kubenswrapper[19803]: I0313 01:17:34.976872 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fz9qf\" (UniqueName: \"kubernetes.io/projected/77e6cd9e-b6ef-491c-a5c3-60dab81fd752-kube-api-access-fz9qf\") pod \"etcd-operator-5884b9cd56-8r87t\" (UID: \"77e6cd9e-b6ef-491c-a5c3-60dab81fd752\") " pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t"
Mar 13 01:17:35.002333 master-0 kubenswrapper[19803]: I0313 01:17:35.002153 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 13 01:17:35.004297 master-0 kubenswrapper[19803]: I0313 01:17:35.004249 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/2581e5b5-8cbb-4fa5-9888-98fb572a6232-cert\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:17:35.022395 master-0 kubenswrapper[19803]: I0313 01:17:35.022301 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xcf2h\" (UniqueName: \"kubernetes.io/projected/bd264af8-4ced-40c4-b4f6-202bab42d0cb-kube-api-access-xcf2h\") pod \"node-resolver-xmwg6\" (UID: \"bd264af8-4ced-40c4-b4f6-202bab42d0cb\") " pod="openshift-dns/node-resolver-xmwg6"
Mar 13 01:17:35.022587 master-0 kubenswrapper[19803]: I0313 01:17:35.022495 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 13 01:17:35.031766 master-0 kubenswrapper[19803]: I0313 01:17:35.031686 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2581e5b5-8cbb-4fa5-9888-98fb572a6232-auth-proxy-config\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9"
Mar 13 01:17:35.043273 master-0 kubenswrapper[19803]: I0313 01:17:35.043236 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 13 01:17:35.062329 master-0 kubenswrapper[19803]: I0313 01:17:35.062123 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 13 01:17:35.081233 master-0 kubenswrapper[19803]: I0313 01:17:35.081155 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-mbxd4"
Mar 13 01:17:35.104371 master-0 kubenswrapper[19803]: I0313 01:17:35.104121 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-wrgnw"
Mar 13 01:17:35.121075 master-0 kubenswrapper[19803]: I0313 01:17:35.121018 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 13 01:17:35.124262 master-0 kubenswrapper[19803]: I0313 01:17:35.124197 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/56e20b21-ba17-46ae-a740-0e7bd45eae5f-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: \"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6"
Mar 13 01:17:35.141166 master-0 kubenswrapper[19803]: I0313 01:17:35.140862 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert"
Mar 13 01:17:35.141166 master-0 kubenswrapper[19803]: I0313 01:17:35.141095 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:17:35.166678 master-0 kubenswrapper[19803]: I0313 01:17:35.166610 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Mar 13 01:17:35.174610 master-0 kubenswrapper[19803]: I0313 01:17:35.174564 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:17:35.180936 master-0 kubenswrapper[19803]: I0313 01:17:35.180910 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt"
Mar 13 01:17:35.201490 master-0 kubenswrapper[19803]: I0313 01:17:35.201451 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-m4df5"
Mar 13 01:17:35.221572 master-0 kubenswrapper[19803]: I0313 01:17:35.221440 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 01:17:35.241596 master-0 kubenswrapper[19803]: I0313 01:17:35.241540 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 01:17:35.251489 master-0 kubenswrapper[19803]: I0313 01:17:35.251452 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:17:35.261736 master-0 kubenswrapper[19803]: I0313 01:17:35.261688 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Mar 13 01:17:35.262924 master-0 kubenswrapper[19803]: I0313 01:17:35.262897 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:17:35.282103 master-0 kubenswrapper[19803]: I0313 01:17:35.282057 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt"
Mar 13 01:17:35.299801 master-0 kubenswrapper[19803]: I0313 01:17:35.299758 19803 request.go:700] Waited for 1.978351469s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-machine-api/configmaps?fieldSelector=metadata.name%3Dcluster-baremetal-operator-images&limit=500&resourceVersion=0
Mar 13 01:17:35.303032 master-0 kubenswrapper[19803]: I0313 01:17:35.302988 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 13 01:17:35.312261 master-0 kubenswrapper[19803]: I0313 01:17:35.312215 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-images\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:17:35.320766 master-0 kubenswrapper[19803]: I0313 01:17:35.320743 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 13 01:17:35.321970 master-0 kubenswrapper[19803]: I0313 01:17:35.321934 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6e799871-735a-44e8-8193-24c5bb388928-serving-cert\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:17:35.340557 master-0 kubenswrapper[19803]: I0313 01:17:35.340535 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 13 01:17:35.369972 master-0 kubenswrapper[19803]: I0313 01:17:35.369932 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle"
Mar 13 01:17:35.374530 master-0 kubenswrapper[19803]: I0313 01:17:35.374482 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-trusted-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:17:35.381140 master-0 kubenswrapper[19803]: I0313 01:17:35.381100 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 13 01:17:35.390563 master-0 kubenswrapper[19803]: I0313 01:17:35.390496 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e799871-735a-44e8-8193-24c5bb388928-service-ca-bundle\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh"
Mar 13 01:17:35.402114 master-0 kubenswrapper[19803]: I0313 01:17:35.402068 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 13 01:17:35.403635 master-0 kubenswrapper[19803]: I0313 01:17:35.403603 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-baremetal-operator-tls\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cluster-baremetal-operator-tls\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:17:35.421623 master-0 kubenswrapper[19803]: I0313 01:17:35.421552 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 13 01:17:35.431231 master-0 kubenswrapper[19803]: I0313 01:17:35.431167 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/21110b48-25fc-434a-b156-7f6bd6064bed-cert\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:17:35.441073 master-0 kubenswrapper[19803]: I0313 01:17:35.441031 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy"
Mar 13 01:17:35.441602 master-0 kubenswrapper[19803]: I0313 01:17:35.441567 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/21110b48-25fc-434a-b156-7f6bd6064bed-config\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: \"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt"
Mar 13 01:17:35.461582 master-0 kubenswrapper[19803]: I0313 01:17:35.461526 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 13 01:17:35.471389 master-0 kubenswrapper[19803]: I0313 01:17:35.471358 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-samples-operator-tls\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg"
Mar 13 01:17:35.481219 master-0 kubenswrapper[19803]: I0313 01:17:35.481189 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 13 01:17:35.501215 master-0 kubenswrapper[19803]: I0313 01:17:35.501152 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Mar 13 01:17:35.522156 master-0 kubenswrapper[19803]: I0313 01:17:35.522103 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert"
Mar 13 01:17:35.524807 master-0 kubenswrapper[19803]: I0313 01:17:35.524766 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-storage-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-cluster-storage-operator-serving-cert\") pod \"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm"
Mar 13 01:17:35.540415 master-0 kubenswrapper[19803]: I0313 01:17:35.540380 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 01:17:35.544097 master-0 kubenswrapper[19803]: I0313 01:17:35.544058 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-serving-cert\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v"
Mar 13 01:17:35.561275 master-0 kubenswrapper[19803]: I0313 01:17:35.561220 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 01:17:35.561968 master-0 kubenswrapper[19803]: I0313 01:17:35.561903 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-service-ca\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v"
Mar 13 01:17:35.580714 master-0 kubenswrapper[19803]: I0313 01:17:35.580671 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 01:17:35.600844 master-0 kubenswrapper[19803]: I0313 01:17:35.600803 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 13 01:17:35.624643 master-0 kubenswrapper[19803]: I0313 01:17:35.624586 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 13 01:17:35.642294 master-0 kubenswrapper[19803]: I0313 01:17:35.642230 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 01:17:35.652733 master-0 kubenswrapper[19803]: I0313 01:17:35.652660 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"
Mar 13 01:17:35.662165 master-0 kubenswrapper[19803]: I0313 01:17:35.662087 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config"
Mar 13 01:17:35.663711 master-0 kubenswrapper[19803]: I0313 01:17:35.663663 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"
Mar 13 01:17:35.683901 master-0 kubenswrapper[19803]: I0313 01:17:35.683846 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca"
Mar 13 01:17:35.691021 master-0 kubenswrapper[19803]: I0313 01:17:35.690982 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"
Mar 13 01:17:35.701076 master-0 kubenswrapper[19803]: I0313 01:17:35.701022 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt"
Mar 13 01:17:35.743406 master-0 kubenswrapper[19803]: I0313 01:17:35.743251 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 13 01:17:35.744418 master-0 kubenswrapper[19803]: I0313 01:17:35.743700 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert"
Mar 13 01:17:35.751292 master-0 kubenswrapper[19803]: I0313 01:17:35.751255 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-credential-operator-serving-cert\" (UniqueName: \"kubernetes.io/secret/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cloud-credential-operator-serving-cert\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:17:35.754440 master-0 kubenswrapper[19803]: I0313 01:17:35.754360 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cco-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-cco-trusted-ca\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s"
Mar 13 01:17:35.762244 master-0 kubenswrapper[19803]: I0313 01:17:35.762192 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt"
Mar 13 01:17:35.782825 master-0 kubenswrapper[19803]: I0313 01:17:35.782771 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bbmgf"
Mar 13 01:17:35.801769 master-0 kubenswrapper[19803]: I0313 01:17:35.801707 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config"
Mar 13 01:17:35.804485 master-0 kubenswrapper[19803]: I0313 01:17:35.804391 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c"
Mar 13 01:17:35.824222 master-0 kubenswrapper[19803]: I0313 01:17:35.824142 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-5vwqr"
Mar 13 01:17:35.841553 master-0 kubenswrapper[19803]: I0313 01:17:35.841480 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 13 01:17:35.851649 master-0 kubenswrapper[19803]: I0313 01:17:35.851585 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/7e938267-de1f-46f7-bf78-b0b3e810c4fa-machine-approver-tls\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c"
Mar 13 01:17:35.862042 master-0 kubenswrapper[19803]: I0313 01:17:35.861983 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Mar 13 01:17:35.862501 master-0 kubenswrapper[19803]: I0313 01:17:35.862455 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-machine-api-operator-tls\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb"
Mar 13 01:17:35.882203 master-0 kubenswrapper[19803]: I0313 01:17:35.882134 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 13 01:17:35.901974 master-0 kubenswrapper[19803]: I0313 01:17:35.901884 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 13 01:17:35.922816 master-0 kubenswrapper[19803]: I0313 01:17:35.922773 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 13 01:17:35.931413 master-0 kubenswrapper[19803]: I0313 01:17:35.931374 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7e938267-de1f-46f7-bf78-b0b3e810c4fa-auth-proxy-config\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c"
Mar 13 01:17:35.941953 master-0 kubenswrapper[19803]: I0313 01:17:35.941922 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 13 01:17:35.942230 master-0 kubenswrapper[19803]: I0313 01:17:35.942209 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-images\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb"
Mar 13 01:17:35.961915 master-0 kubenswrapper[19803]: I0313 01:17:35.961867 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 13 01:17:35.964650 master-0 kubenswrapper[19803]: I0313 01:17:35.964535 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-config\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb"
Mar 13 01:17:35.970725 master-0 kubenswrapper[19803]: E0313 01:17:35.970684 19803 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:35.970922 master-0 kubenswrapper[19803]: E0313 01:17:35.970908 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config podName:c55a215a-9a95-4f48-8668-9b76503c3044 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.970885879 +0000 UTC m=+4.936033548 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "mcc-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config") pod "machine-config-controller-ff46b7bdf-nnjxp" (UID: "c55a215a-9a95-4f48-8668-9b76503c3044") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:35.971017 master-0 kubenswrapper[19803]: E0313 01:17:35.970736 19803 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:35.971105 master-0 kubenswrapper[19803]: E0313 01:17:35.971095 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config podName:3418d0fb-d0ae-4634-a645-dc387a19147f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.971086665 +0000 UTC m=+4.936234344 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "mcd-auth-proxy-config" (UniqueName: "kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config") pod "machine-config-daemon-fprhw" (UID: "3418d0fb-d0ae-4634-a645-dc387a19147f") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:35.971173 master-0 kubenswrapper[19803]: E0313 01:17:35.970775 19803 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:35.971265 master-0 kubenswrapper[19803]: E0313 01:17:35.971253 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config podName:80eb89dc-ccfc-4360-811a-82a3ef6f7b65 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.971245659 +0000 UTC m=+4.936393338 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config") pod "cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" (UID: "80eb89dc-ccfc-4360-811a-82a3ef6f7b65") : failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:35.971390 master-0 kubenswrapper[19803]: E0313 01:17:35.971371 19803 secret.go:189] Couldn't get secret openshift-machine-config-operator/mcc-proxy-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:35.971485 master-0 kubenswrapper[19803]: E0313 01:17:35.971475 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls podName:c55a215a-9a95-4f48-8668-9b76503c3044 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.971465014 +0000 UTC m=+4.936612693 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls") pod "machine-config-controller-ff46b7bdf-nnjxp" (UID: "c55a215a-9a95-4f48-8668-9b76503c3044") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:35.971601 master-0 kubenswrapper[19803]: E0313 01:17:35.971587 19803 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:35.971721 master-0 kubenswrapper[19803]: E0313 01:17:35.971681 19803 secret.go:189] Couldn't get secret openshift-cloud-controller-manager-operator/cloud-controller-manager-operator-tls: failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:35.971776 master-0 kubenswrapper[19803]: E0313 01:17:35.971683 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert podName:ca06fac5-6707-4521-88ce-1768fede42c2 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.971673549 +0000 UTC m=+4.936821228 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "apiservice-cert" (UniqueName: "kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert") pod "packageserver-7877bc66f6-sf5t2" (UID: "ca06fac5-6707-4521-88ce-1768fede42c2") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:35.971810 master-0 kubenswrapper[19803]: E0313 01:17:35.971790 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls podName:80eb89dc-ccfc-4360-811a-82a3ef6f7b65 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.971772771 +0000 UTC m=+4.936920470 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cloud-controller-manager-operator-tls" (UniqueName: "kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls") pod "cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" (UID: "80eb89dc-ccfc-4360-811a-82a3ef6f7b65") : failed to sync secret cache: timed out waiting for the condition
Mar 13 01:17:35.971869 master-0 kubenswrapper[19803]: E0313 01:17:35.971855 19803 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/kube-rbac-proxy: failed to sync configmap cache: timed out waiting for the condition
Mar 13 01:17:35.971953 master-0 kubenswrapper[19803]: E0313 01:17:35.971943 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config podName:dbcb4b80-425a-4dd5-93a8-bb462f641ef1 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.971935135 +0000 UTC m=+4.937082814 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "auth-proxy-config" (UniqueName: "kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config") pod "machine-config-operator-fdb5c78b5-fr2dk" (UID: "dbcb4b80-425a-4dd5-93a8-bb462f641ef1") : failed to sync configmap cache: timed out waiting for the condition Mar 13 01:17:35.972039 master-0 kubenswrapper[19803]: E0313 01:17:35.972012 19803 secret.go:189] Couldn't get secret openshift-machine-config-operator/mco-proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 01:17:35.972091 master-0 kubenswrapper[19803]: E0313 01:17:35.972027 19803 secret.go:189] Couldn't get secret openshift-operator-lifecycle-manager/packageserver-service-cert: failed to sync secret cache: timed out waiting for the condition Mar 13 01:17:35.972194 master-0 kubenswrapper[19803]: E0313 01:17:35.972086 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls podName:dbcb4b80-425a-4dd5-93a8-bb462f641ef1 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.972067118 +0000 UTC m=+4.937214867 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls") pod "machine-config-operator-fdb5c78b5-fr2dk" (UID: "dbcb4b80-425a-4dd5-93a8-bb462f641ef1") : failed to sync secret cache: timed out waiting for the condition Mar 13 01:17:35.972268 master-0 kubenswrapper[19803]: E0313 01:17:35.972245 19803 configmap.go:193] Couldn't get configMap openshift-machine-config-operator/machine-config-operator-images: failed to sync configmap cache: timed out waiting for the condition Mar 13 01:17:35.972316 master-0 kubenswrapper[19803]: E0313 01:17:35.972295 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images podName:dbcb4b80-425a-4dd5-93a8-bb462f641ef1 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.972281813 +0000 UTC m=+4.937429512 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images") pod "machine-config-operator-fdb5c78b5-fr2dk" (UID: "dbcb4b80-425a-4dd5-93a8-bb462f641ef1") : failed to sync configmap cache: timed out waiting for the condition Mar 13 01:17:35.972356 master-0 kubenswrapper[19803]: E0313 01:17:35.972340 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert podName:ca06fac5-6707-4521-88ce-1768fede42c2 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.972330944 +0000 UTC m=+4.937478633 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert") pod "packageserver-7877bc66f6-sf5t2" (UID: "ca06fac5-6707-4521-88ce-1768fede42c2") : failed to sync secret cache: timed out waiting for the condition Mar 13 01:17:35.973004 master-0 kubenswrapper[19803]: E0313 01:17:35.972959 19803 secret.go:189] Couldn't get secret openshift-machine-config-operator/proxy-tls: failed to sync secret cache: timed out waiting for the condition Mar 13 01:17:35.973073 master-0 kubenswrapper[19803]: E0313 01:17:35.972982 19803 configmap.go:193] Couldn't get configMap openshift-cloud-controller-manager-operator/cloud-controller-manager-images: failed to sync configmap cache: timed out waiting for the condition Mar 13 01:17:35.973169 master-0 kubenswrapper[19803]: E0313 01:17:35.973097 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls podName:3418d0fb-d0ae-4634-a645-dc387a19147f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.973066863 +0000 UTC m=+4.938214552 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "proxy-tls" (UniqueName: "kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls") pod "machine-config-daemon-fprhw" (UID: "3418d0fb-d0ae-4634-a645-dc387a19147f") : failed to sync secret cache: timed out waiting for the condition Mar 13 01:17:35.973261 master-0 kubenswrapper[19803]: E0313 01:17:35.973249 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images podName:80eb89dc-ccfc-4360-811a-82a3ef6f7b65 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:36.973233997 +0000 UTC m=+4.938381876 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "images" (UniqueName: "kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images") pod "cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" (UID: "80eb89dc-ccfc-4360-811a-82a3ef6f7b65") : failed to sync configmap cache: timed out waiting for the condition Mar 13 01:17:35.983088 master-0 kubenswrapper[19803]: I0313 01:17:35.983044 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-zlp9s" Mar 13 01:17:36.003640 master-0 kubenswrapper[19803]: I0313 01:17:36.003550 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-24stp" Mar 13 01:17:36.021440 master-0 kubenswrapper[19803]: I0313 01:17:36.021356 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 01:17:36.042246 master-0 kubenswrapper[19803]: I0313 01:17:36.042168 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 01:17:36.061315 master-0 kubenswrapper[19803]: I0313 01:17:36.061138 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Mar 13 01:17:36.081646 master-0 kubenswrapper[19803]: I0313 01:17:36.081564 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 01:17:36.103302 master-0 kubenswrapper[19803]: I0313 01:17:36.103226 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Mar 13 01:17:36.127109 master-0 kubenswrapper[19803]: I0313 01:17:36.127034 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-jgxk7" Mar 
13 01:17:36.141565 master-0 kubenswrapper[19803]: I0313 01:17:36.141497 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Mar 13 01:17:36.162216 master-0 kubenswrapper[19803]: I0313 01:17:36.162150 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Mar 13 01:17:36.181320 master-0 kubenswrapper[19803]: I0313 01:17:36.181278 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-znq86" Mar 13 01:17:36.201445 master-0 kubenswrapper[19803]: I0313 01:17:36.201391 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 01:17:36.221609 master-0 kubenswrapper[19803]: I0313 01:17:36.221562 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-hzxsb" Mar 13 01:17:36.241538 master-0 kubenswrapper[19803]: I0313 01:17:36.241459 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls" Mar 13 01:17:36.260771 master-0 kubenswrapper[19803]: I0313 01:17:36.260659 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-mmsdc" Mar 13 01:17:36.281534 master-0 kubenswrapper[19803]: I0313 01:17:36.281459 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 01:17:36.301217 master-0 kubenswrapper[19803]: I0313 01:17:36.301151 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt" Mar 13 01:17:36.320299 master-0 
kubenswrapper[19803]: I0313 01:17:36.320073 19803 request.go:700] Waited for 2.990325355s due to client-side throttling, not priority and fairness, request: GET:https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-cloud-controller-manager-operator/configmaps?fieldSelector=metadata.name%3Dcloud-controller-manager-images&limit=500&resourceVersion=0 Mar 13 01:17:36.321772 master-0 kubenswrapper[19803]: I0313 01:17:36.321727 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 01:17:36.342050 master-0 kubenswrapper[19803]: I0313 01:17:36.342004 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Mar 13 01:17:36.395324 master-0 kubenswrapper[19803]: I0313 01:17:36.395231 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdpt2\" (UniqueName: \"kubernetes.io/projected/3418d0fb-d0ae-4634-a645-dc387a19147f-kube-api-access-tdpt2\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:36.412259 master-0 kubenswrapper[19803]: I0313 01:17:36.412189 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-44dmt\" (UniqueName: \"kubernetes.io/projected/9863f7ff-4c8d-42a3-a822-01697cf9c920-kube-api-access-44dmt\") pod \"certified-operators-64xrl\" (UID: \"9863f7ff-4c8d-42a3-a822-01697cf9c920\") " pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:17:36.432964 master-0 kubenswrapper[19803]: I0313 01:17:36.432898 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgz5w\" (UniqueName: \"kubernetes.io/projected/581ff17d-f121-4ece-8e45-81f1f710d163-kube-api-access-pgz5w\") pod \"route-controller-manager-6cc78fd984-g55t4\" (UID: 
\"581ff17d-f121-4ece-8e45-81f1f710d163\") " pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:17:36.457317 master-0 kubenswrapper[19803]: I0313 01:17:36.457256 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gh7ks\" (UniqueName: \"kubernetes.io/projected/2581e5b5-8cbb-4fa5-9888-98fb572a6232-kube-api-access-gh7ks\") pod \"cluster-autoscaler-operator-69576476f7-lrmx9\" (UID: \"2581e5b5-8cbb-4fa5-9888-98fb572a6232\") " pod="openshift-machine-api/cluster-autoscaler-operator-69576476f7-lrmx9" Mar 13 01:17:36.476402 master-0 kubenswrapper[19803]: I0313 01:17:36.476338 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kvmpk\" (UniqueName: \"kubernetes.io/projected/7e938267-de1f-46f7-bf78-b0b3e810c4fa-kube-api-access-kvmpk\") pod \"machine-approver-754bdc9f9d-cp77c\" (UID: \"7e938267-de1f-46f7-bf78-b0b3e810c4fa\") " pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" Mar 13 01:17:36.493167 master-0 kubenswrapper[19803]: I0313 01:17:36.493105 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-98t7n\" (UniqueName: \"kubernetes.io/projected/778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0-kube-api-access-98t7n\") pod \"cluster-samples-operator-664cb58b85-mcfmg\" (UID: \"778abcc4-2d4d-4c43-b7d4-a24e4fdf60f0\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-664cb58b85-mcfmg" Mar 13 01:17:36.513961 master-0 kubenswrapper[19803]: I0313 01:17:36.513835 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvckz\" (UniqueName: \"kubernetes.io/projected/fb5dee36-70a4-47a4-afc2-d3209a476362-kube-api-access-mvckz\") pod \"redhat-marketplace-cx58l\" (UID: \"fb5dee36-70a4-47a4-afc2-d3209a476362\") " pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:36.539584 master-0 kubenswrapper[19803]: I0313 01:17:36.539532 19803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvrdt\" (UniqueName: \"kubernetes.io/projected/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-kube-api-access-jvrdt\") pod \"controller-manager-7f46d696f9-s9d6s\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") " pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:17:36.569688 master-0 kubenswrapper[19803]: I0313 01:17:36.569633 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2pt2w\" (UniqueName: \"kubernetes.io/projected/ca06fac5-6707-4521-88ce-1768fede42c2-kube-api-access-2pt2w\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2" Mar 13 01:17:36.578965 master-0 kubenswrapper[19803]: I0313 01:17:36.578903 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/b3bf9dde-ca5b-46b8-883c-51e88ddf52e1-kube-api-access\") pod \"cluster-version-operator-8c9c967c7-jzj9v\" (UID: \"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1\") " pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" Mar 13 01:17:36.593886 master-0 kubenswrapper[19803]: I0313 01:17:36.593843 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9rtds\" (UniqueName: \"kubernetes.io/projected/6da2aac0-42a0-45c2-93ec-b148f5889e8b-kube-api-access-9rtds\") pod \"redhat-operators-d9nkp\" (UID: \"6da2aac0-42a0-45c2-93ec-b148f5889e8b\") " pod="openshift-marketplace/redhat-operators-d9nkp" Mar 13 01:17:36.612743 master-0 kubenswrapper[19803]: I0313 01:17:36.612671 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g89p7\" (UniqueName: \"kubernetes.io/projected/56e20b21-ba17-46ae-a740-0e7bd45eae5f-kube-api-access-g89p7\") pod \"control-plane-machine-set-operator-6686554ddc-pmrq6\" (UID: 
\"56e20b21-ba17-46ae-a740-0e7bd45eae5f\") " pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" Mar 13 01:17:36.640985 master-0 kubenswrapper[19803]: I0313 01:17:36.640935 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sd26j\" (UniqueName: \"kubernetes.io/projected/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-kube-api-access-sd26j\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" Mar 13 01:17:36.660256 master-0 kubenswrapper[19803]: I0313 01:17:36.660193 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt62j\" (UniqueName: \"kubernetes.io/projected/65dd1dc7-1b90-40f6-82c9-dee90a1fa852-kube-api-access-vt62j\") pod \"cloud-credential-operator-55d85b7b47-b4w7s\" (UID: \"65dd1dc7-1b90-40f6-82c9-dee90a1fa852\") " pod="openshift-cloud-credential-operator/cloud-credential-operator-55d85b7b47-b4w7s" Mar 13 01:17:36.679581 master-0 kubenswrapper[19803]: I0313 01:17:36.679482 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g8n5d\" (UniqueName: \"kubernetes.io/projected/c55a215a-9a95-4f48-8668-9b76503c3044-kube-api-access-g8n5d\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:36.701837 master-0 kubenswrapper[19803]: I0313 01:17:36.701758 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qq6v6\" (UniqueName: \"kubernetes.io/projected/9d2f93bd-e4ce-4ed2-b249-946338f753ed-kube-api-access-qq6v6\") pod \"community-operators-zglhp\" (UID: \"9d2f93bd-e4ce-4ed2-b249-946338f753ed\") " pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:36.715870 master-0 kubenswrapper[19803]: I0313 01:17:36.715806 19803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4lqgs\" (UniqueName: \"kubernetes.io/projected/2760a216-fd4b-46d9-a4ec-2d3285ec02bd-kube-api-access-4lqgs\") pod \"machine-api-operator-84bf6db4f9-rpjkb\" (UID: \"2760a216-fd4b-46d9-a4ec-2d3285ec02bd\") " pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" Mar 13 01:17:36.740118 master-0 kubenswrapper[19803]: I0313 01:17:36.740046 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jthxn\" (UniqueName: \"kubernetes.io/projected/6e799871-735a-44e8-8193-24c5bb388928-kube-api-access-jthxn\") pod \"insights-operator-8f89dfddd-hn4jh\" (UID: \"6e799871-735a-44e8-8193-24c5bb388928\") " pod="openshift-insights/insights-operator-8f89dfddd-hn4jh" Mar 13 01:17:36.763449 master-0 kubenswrapper[19803]: I0313 01:17:36.763391 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-psvcz\" (UniqueName: \"kubernetes.io/projected/65ef9aae-25a5-46c6-adf3-634f8f7a29bc-kube-api-access-psvcz\") pod \"cluster-storage-operator-6fbfc8dc8f-h9mwm\" (UID: \"65ef9aae-25a5-46c6-adf3-634f8f7a29bc\") " pod="openshift-cluster-storage-operator/cluster-storage-operator-6fbfc8dc8f-h9mwm" Mar 13 01:17:36.778426 master-0 kubenswrapper[19803]: I0313 01:17:36.778301 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pt5g7\" (UniqueName: \"kubernetes.io/projected/536a2de1-e13c-47d1-b61d-88e0a5fd2851-kube-api-access-pt5g7\") pod \"apiserver-c84d45cdc-rj5st\" (UID: \"536a2de1-e13c-47d1-b61d-88e0a5fd2851\") " pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:36.802913 master-0 kubenswrapper[19803]: I0313 01:17:36.802819 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9npsh\" (UniqueName: \"kubernetes.io/projected/21110b48-25fc-434a-b156-7f6bd6064bed-kube-api-access-9npsh\") pod \"cluster-baremetal-operator-5cdb4c5598-5dvnt\" (UID: 
\"21110b48-25fc-434a-b156-7f6bd6064bed\") " pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" Mar 13 01:17:36.814950 master-0 kubenswrapper[19803]: I0313 01:17:36.814876 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wld\" (UniqueName: \"kubernetes.io/projected/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-kube-api-access-t7wld\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" Mar 13 01:17:36.833529 master-0 kubenswrapper[19803]: I0313 01:17:36.833450 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbcg4\" (UniqueName: \"kubernetes.io/projected/07894508-4e56-48d4-ab3c-4ab8f4ea2e7e-kube-api-access-nbcg4\") pod \"operator-controller-controller-manager-6598bfb6c4-n4252\" (UID: \"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e\") " pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:17:36.857771 master-0 kubenswrapper[19803]: I0313 01:17:36.857711 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nd8dv\" (UniqueName: \"kubernetes.io/projected/81835d51-a414-440f-889b-690561e98d6a-kube-api-access-nd8dv\") pod \"catalogd-controller-manager-7f8b8b6f4c-z4qvz\" (UID: \"81835d51-a414-440f-889b-690561e98d6a\") " pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:17:36.874451 master-0 kubenswrapper[19803]: E0313 01:17:36.874382 19803 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:17:36.874451 master-0 kubenswrapper[19803]: E0313 01:17:36.874452 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: 
object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:17:36.874631 master-0 kubenswrapper[19803]: E0313 01:17:36.874598 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access podName:7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:37.374562957 +0000 UTC m=+5.339710676 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access") pod "installer-4-master-0" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:17:36.902104 master-0 kubenswrapper[19803]: E0313 01:17:36.902026 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:17:36.902104 master-0 kubenswrapper[19803]: E0313 01:17:36.902102 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:17:36.902246 master-0 kubenswrapper[19803]: E0313 01:17:36.902232 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:37.402190306 +0000 UTC m=+5.367338185 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:17:36.915021 master-0 kubenswrapper[19803]: E0313 01:17:36.914958 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:17:36.915021 master-0 kubenswrapper[19803]: E0313 01:17:36.915016 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:17:36.915194 master-0 kubenswrapper[19803]: E0313 01:17:36.915110 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:37.415081967 +0000 UTC m=+5.380229886 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:17:36.931556 master-0 kubenswrapper[19803]: I0313 01:17:36.931431 19803 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID="" Mar 13 01:17:36.971784 master-0 kubenswrapper[19803]: I0313 01:17:36.971711 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID="" Mar 13 01:17:37.004932 master-0 kubenswrapper[19803]: I0313 01:17:37.004872 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" Mar 13 01:17:37.005088 master-0 kubenswrapper[19803]: I0313 01:17:37.004951 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:17:37.005088 master-0 kubenswrapper[19803]: I0313 01:17:37.004989 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " 
pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"
Mar 13 01:17:37.005088 master-0 kubenswrapper[19803]: I0313 01:17:37.005014 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp"
Mar 13 01:17:37.005483 master-0 kubenswrapper[19803]: I0313 01:17:37.005373 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:37.006036 master-0 kubenswrapper[19803]: I0313 01:17:37.005774 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-auth-proxy-config\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"
Mar 13 01:17:37.006036 master-0 kubenswrapper[19803]: I0313 01:17:37.005800 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cloud-controller-manager-operator-tls\" (UniqueName: \"kubernetes.io/secret/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-cloud-controller-manager-operator-tls\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:37.006036 master-0 kubenswrapper[19803]: I0313 01:17:37.005873 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/c55a215a-9a95-4f48-8668-9b76503c3044-proxy-tls\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp"
Mar 13 01:17:37.006953 master-0 kubenswrapper[19803]: I0313 01:17:37.006001 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"
Mar 13 01:17:37.006953 master-0 kubenswrapper[19803]: I0313 01:17:37.006312 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-apiservice-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"
Mar 13 01:17:37.006953 master-0 kubenswrapper[19803]: I0313 01:17:37.006680 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"
Mar 13 01:17:37.006953 master-0 kubenswrapper[19803]: I0313 01:17:37.006747 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"
Mar 13 01:17:37.006953 master-0 kubenswrapper[19803]: I0313 01:17:37.006845 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"
Mar 13 01:17:37.007341 master-0 kubenswrapper[19803]: I0313 01:17:37.006957 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw"
Mar 13 01:17:37.007341 master-0 kubenswrapper[19803]: I0313 01:17:37.007015 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:37.007341 master-0 kubenswrapper[19803]: I0313 01:17:37.007148 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:37.007341 master-0 kubenswrapper[19803]: I0313 01:17:37.007298 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3418d0fb-d0ae-4634-a645-dc387a19147f-mcd-auth-proxy-config\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw"
Mar 13 01:17:37.007737 master-0 kubenswrapper[19803]: I0313 01:17:37.007422 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/c55a215a-9a95-4f48-8668-9b76503c3044-mcc-auth-proxy-config\") pod \"machine-config-controller-ff46b7bdf-nnjxp\" (UID: \"c55a215a-9a95-4f48-8668-9b76503c3044\") " pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp"
Mar 13 01:17:37.007737 master-0 kubenswrapper[19803]: I0313 01:17:37.007723 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ca06fac5-6707-4521-88ce-1768fede42c2-webhook-cert\") pod \"packageserver-7877bc66f6-sf5t2\" (UID: \"ca06fac5-6707-4521-88ce-1768fede42c2\") " pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"
Mar 13 01:17:37.007858 master-0 kubenswrapper[19803]: I0313 01:17:37.007824 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-auth-proxy-config\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:37.007926 master-0 kubenswrapper[19803]: I0313 01:17:37.007827 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-images\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"
Mar 13 01:17:37.007987 master-0 kubenswrapper[19803]: I0313 01:17:37.007962 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/3418d0fb-d0ae-4634-a645-dc387a19147f-proxy-tls\") pod \"machine-config-daemon-fprhw\" (UID: \"3418d0fb-d0ae-4634-a645-dc387a19147f\") " pod="openshift-machine-config-operator/machine-config-daemon-fprhw"
Mar 13 01:17:37.008505 master-0 kubenswrapper[19803]: I0313 01:17:37.008275 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/80eb89dc-ccfc-4360-811a-82a3ef6f7b65-images\") pod \"cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8\" (UID: \"80eb89dc-ccfc-4360-811a-82a3ef6f7b65\") " pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8"
Mar 13 01:17:37.008505 master-0 kubenswrapper[19803]: I0313 01:17:37.008310 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/dbcb4b80-425a-4dd5-93a8-bb462f641ef1-proxy-tls\") pod \"machine-config-operator-fdb5c78b5-fr2dk\" (UID: \"dbcb4b80-425a-4dd5-93a8-bb462f641ef1\") " pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk"
Mar 13 01:17:37.014259 master-0 kubenswrapper[19803]: E0313 01:17:37.014194 19803 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.7s"
Mar 13 01:17:37.027208 master-0 kubenswrapper[19803]: I0313 01:17:37.027096 19803 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-controller-manager-master-0" podUID=""
Mar 13 01:17:37.064039 master-0 kubenswrapper[19803]: I0313 01:17:37.063869 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" podUID=""
Mar 13 01:17:37.103090 master-0 kubenswrapper[19803]: I0313 01:17:37.102969 19803 mirror_client.go:130] "Deleting a mirror pod" pod="kube-system/bootstrap-kube-scheduler-master-0" podUID=""
Mar 13 01:17:37.142926 master-0 kubenswrapper[19803]: I0313 01:17:37.142840 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q"
Mar 13 01:17:37.142926 master-0 kubenswrapper[19803]: I0313 01:17:37.142933 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.142953 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.142981 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.142998 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143011 19803 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="e5815d77-bfd4-459e-9678-c08ac790805d"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143040 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-scheduler-master-0"]
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143052 19803 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-scheduler-master-0" mirrorPodUID="e5815d77-bfd4-459e-9678-c08ac790805d"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143111 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143128 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143156 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143171 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143183 19803 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="397f933d-d01c-48f5-905c-aaf9a01c8b0a"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143227 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143241 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["kube-system/bootstrap-kube-controller-manager-master-0"]
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143251 19803 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="kube-system/bootstrap-kube-controller-manager-master-0" mirrorPodUID="397f933d-d01c-48f5-905c-aaf9a01c8b0a"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143273 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-7d9c49f57b-4jttq"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143287 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143298 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143307 19803 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="32b55bbd-f227-4444-94a9-28a06b9b2f01"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143334 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143360 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143373 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/bootstrap-kube-apiserver-master-0"]
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143383 19803 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/bootstrap-kube-apiserver-master-0" mirrorPodUID="32b55bbd-f227-4444-94a9-28a06b9b2f01"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143394 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143420 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143442 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-d64cfc9db-r4gzg"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143499 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143660 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143691 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143721 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143748 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q"
Mar 13 01:17:37.145280 master-0 kubenswrapper[19803]: I0313 01:17:37.143771 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q"
Mar 13 01:17:37.151891 master-0 kubenswrapper[19803]: I0313 01:17:37.151801 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:17:37.155497 master-0 kubenswrapper[19803]: I0313 01:17:37.155397 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:17:37.212258 master-0 kubenswrapper[19803]: I0313 01:17:37.212182 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:17:37.215422 master-0 kubenswrapper[19803]: I0313 01:17:37.215349 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-49pfj"
Mar 13 01:17:37.249887 master-0 kubenswrapper[19803]: I0313 01:17:37.249679 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d9nkp"
Mar 13 01:17:37.320562 master-0 kubenswrapper[19803]: I0313 01:17:37.320328 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 13 01:17:37.322265 master-0 kubenswrapper[19803]: I0313 01:17:37.322187 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d9nkp"
Mar 13 01:17:37.342583 master-0 kubenswrapper[19803]: I0313 01:17:37.342459 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 13 01:17:37.417205 master-0 kubenswrapper[19803]: I0313 01:17:37.417120 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:37.417205 master-0 kubenswrapper[19803]: I0313 01:17:37.417197 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:37.417685 master-0 kubenswrapper[19803]: I0313 01:17:37.417456 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:37.417685 master-0 kubenswrapper[19803]: E0313 01:17:37.417497 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:37.417685 master-0 kubenswrapper[19803]: E0313 01:17:37.417612 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:37.417976 master-0 kubenswrapper[19803]: E0313 01:17:37.417718 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:38.417676368 +0000 UTC m=+6.382824087 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:37.417976 master-0 kubenswrapper[19803]: E0313 01:17:37.417777 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:37.417976 master-0 kubenswrapper[19803]: E0313 01:17:37.417815 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:37.417976 master-0 kubenswrapper[19803]: E0313 01:17:37.417919 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:38.417860492 +0000 UTC m=+6.383008391 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:37.418578 master-0 kubenswrapper[19803]: E0313 01:17:37.418503 19803 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:37.418578 master-0 kubenswrapper[19803]: E0313 01:17:37.418576 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:37.418804 master-0 kubenswrapper[19803]: E0313 01:17:37.418658 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access podName:7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:38.41863686 +0000 UTC m=+6.383784579 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access") pod "installer-4-master-0" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:37.435163 master-0 kubenswrapper[19803]: I0313 01:17:37.435090 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-pfsjd"
Mar 13 01:17:37.436866 master-0 kubenswrapper[19803]: I0313 01:17:37.436838 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-pfsjd"
Mar 13 01:17:37.558631 master-0 kubenswrapper[19803]: I0313 01:17:37.558546 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 13 01:17:37.562671 master-0 kubenswrapper[19803]: I0313 01:17:37.562594 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st"
Mar 13 01:17:37.575328 master-0 kubenswrapper[19803]: I0313 01:17:37.575193 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 13 01:17:37.681352 master-0 kubenswrapper[19803]: I0313 01:17:37.681264 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:37.685492 master-0 kubenswrapper[19803]: I0313 01:17:37.685441 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:37.763350 master-0 kubenswrapper[19803]: I0313 01:17:37.763241 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:17:37.770569 master-0 kubenswrapper[19803]: I0313 01:17:37.770468 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-79f8cd6fdd-kzq6q"
Mar 13 01:17:37.840372 master-0 kubenswrapper[19803]: I0313 01:17:37.840163 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cx58l"
Mar 13 01:17:38.112346 master-0 kubenswrapper[19803]: I0313 01:17:38.112097 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:17:38.117632 master-0 kubenswrapper[19803]: I0313 01:17:38.117573 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h"
Mar 13 01:17:38.301096 master-0 kubenswrapper[19803]: I0313 01:17:38.300936 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=8.300910961 podStartE2EDuration="8.300910961s" podCreationTimestamp="2026-03-13 01:17:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:17:38.206485117 +0000 UTC m=+6.171632886" watchObservedRunningTime="2026-03-13 01:17:38.300910961 +0000 UTC m=+6.266058670"
Mar 13 01:17:38.443968 master-0 kubenswrapper[19803]: I0313 01:17:38.443769 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:38.444253 master-0 kubenswrapper[19803]: I0313 01:17:38.444018 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:38.444253 master-0 kubenswrapper[19803]: E0313 01:17:38.444054 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:38.444253 master-0 kubenswrapper[19803]: I0313 01:17:38.444070 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:38.444253 master-0 kubenswrapper[19803]: E0313 01:17:38.444097 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:38.444253 master-0 kubenswrapper[19803]: E0313 01:17:38.444180 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:40.444150764 +0000 UTC m=+8.409298483 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:38.444499 master-0 kubenswrapper[19803]: E0313 01:17:38.444292 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:38.444499 master-0 kubenswrapper[19803]: E0313 01:17:38.444320 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:38.444499 master-0 kubenswrapper[19803]: E0313 01:17:38.444322 19803 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:38.444499 master-0 kubenswrapper[19803]: E0313 01:17:38.444367 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:40.444349028 +0000 UTC m=+8.409496717 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:38.444499 master-0 kubenswrapper[19803]: E0313 01:17:38.444371 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:38.444499 master-0 kubenswrapper[19803]: E0313 01:17:38.444454 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access podName:7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:40.44442561 +0000 UTC m=+8.409573299 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access") pod "installer-4-master-0" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:38.480910 master-0 kubenswrapper[19803]: I0313 01:17:38.480838 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"
Mar 13 01:17:38.488677 master-0 kubenswrapper[19803]: I0313 01:17:38.488609 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7877bc66f6-sf5t2"
Mar 13 01:17:38.554113 master-0 kubenswrapper[19803]: I0313 01:17:38.554017 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podStartSLOduration=5.5539981990000005 podStartE2EDuration="5.553998199s" podCreationTimestamp="2026-03-13 01:17:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:17:38.553220081 +0000 UTC m=+6.518367770" watchObservedRunningTime="2026-03-13 01:17:38.553998199 +0000 UTC m=+6.519145888"
Mar 13 01:17:38.675751 master-0 kubenswrapper[19803]: I0313 01:17:38.675652 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:38.687204 master-0 kubenswrapper[19803]: I0313 01:17:38.687154 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-7dbfb86fbb-mc7xz"
Mar 13 01:17:38.772192 master-0 kubenswrapper[19803]: I0313 01:17:38.772129 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st"
Mar 13 01:17:39.101064 master-0 kubenswrapper[19803]: I0313 01:17:39.100832 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l"
Mar 13 01:17:39.108170 master-0 kubenswrapper[19803]: I0313 01:17:39.108095 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-operator-admission-webhook-8464df8497-rhk4l"
Mar 13 01:17:39.160868 master-0 kubenswrapper[19803]: I0313 01:17:39.160755 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 01:17:39.405099 master-0 kubenswrapper[19803]: I0313 01:17:39.404872 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-64xrl"
Mar 13 01:17:40.190572 master-0 kubenswrapper[19803]: I0313 01:17:40.190385 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:17:40.479395 master-0 kubenswrapper[19803]: I0313 01:17:40.479294 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: I0313 01:17:40.479672 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: I0313 01:17:40.479731 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: E0313 01:17:40.479813 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: E0313 01:17:40.479901 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: E0313 01:17:40.480008 19803 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: E0313 01:17:40.480068 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: E0313 01:17:40.480094 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: E0313 01:17:40.480132 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: E0313 01:17:40.480104 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:44.480051144 +0000 UTC m=+12.445198983 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: E0313 01:17:40.480257 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access podName:7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:44.480219368 +0000 UTC m=+12.445367077 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access") pod "installer-4-master-0" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:40.480379 master-0 kubenswrapper[19803]: E0313 01:17:40.480295 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:44.480283459 +0000 UTC m=+12.445431168 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:40.764278 master-0 kubenswrapper[19803]: I0313 01:17:40.764051 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d9nkp"
Mar 13 01:17:40.836758 master-0 kubenswrapper[19803]: I0313 01:17:40.836683 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d9nkp"
Mar 13 01:17:41.017799 master-0 kubenswrapper[19803]: I0313 01:17:41.017567 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-64xrl"
Mar 13 01:17:41.370764 master-0 kubenswrapper[19803]: I0313 01:17:41.370558 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz"
Mar 13 01:17:41.373799 master-0 kubenswrapper[19803]: I0313 01:17:41.373745 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy"
pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:41.373919 master-0 kubenswrapper[19803]: I0313 01:17:41.373848 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:17:41.454391 master-0 kubenswrapper[19803]: I0313 01:17:41.454327 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cx58l" Mar 13 01:17:41.479348 master-0 kubenswrapper[19803]: I0313 01:17:41.479250 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:41.491443 master-0 kubenswrapper[19803]: I0313 01:17:41.491367 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:41.684580 master-0 kubenswrapper[19803]: I0313 01:17:41.684396 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:17:42.571432 master-0 kubenswrapper[19803]: I0313 01:17:42.571347 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:42.579112 master-0 kubenswrapper[19803]: I0313 01:17:42.579060 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-c84d45cdc-rj5st" Mar 13 01:17:42.871972 master-0 kubenswrapper[19803]: I0313 01:17:42.871802 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:17:42.877470 master-0 kubenswrapper[19803]: I0313 01:17:42.877428 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" Mar 13 01:17:43.968347 master-0 kubenswrapper[19803]: I0313 01:17:43.968262 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:43.969066 master-0 kubenswrapper[19803]: I0313 01:17:43.968503 19803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:17:43.969066 master-0 kubenswrapper[19803]: I0313 01:17:43.968546 19803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:17:43.983716 master-0 kubenswrapper[19803]: I0313 01:17:43.983637 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:44.003463 master-0 kubenswrapper[19803]: I0313 01:17:44.003396 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:17:44.062370 master-0 kubenswrapper[19803]: I0313 01:17:44.062295 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:44.544806 master-0 kubenswrapper[19803]: I0313 01:17:44.544700 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:17:44.545088 master-0 kubenswrapper[19803]: I0313 01:17:44.545033 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " 
pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:17:44.545291 master-0 kubenswrapper[19803]: E0313 01:17:44.545039 19803 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:17:44.545291 master-0 kubenswrapper[19803]: I0313 01:17:44.545228 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:17:44.545411 master-0 kubenswrapper[19803]: E0313 01:17:44.545300 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:17:44.545411 master-0 kubenswrapper[19803]: E0313 01:17:44.545181 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:17:44.545498 master-0 kubenswrapper[19803]: E0313 01:17:44.545426 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:17:44.545498 master-0 kubenswrapper[19803]: E0313 01:17:44.545448 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:17:44.545498 master-0 kubenswrapper[19803]: E0313 01:17:44.545456 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object 
"openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:17:44.545644 master-0 kubenswrapper[19803]: E0313 01:17:44.545388 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access podName:7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:52.545359926 +0000 UTC m=+20.510507645 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access") pod "installer-4-master-0" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:17:44.545644 master-0 kubenswrapper[19803]: E0313 01:17:44.545546 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:52.545536321 +0000 UTC m=+20.510684000 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:17:44.545644 master-0 kubenswrapper[19803]: E0313 01:17:44.545561 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:52.545551181 +0000 UTC m=+20.510698860 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:17:44.693254 master-0 kubenswrapper[19803]: I0313 01:17:44.693180 19803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:17:45.009841 master-0 kubenswrapper[19803]: I0313 01:17:45.009769 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:45.067748 master-0 kubenswrapper[19803]: I0313 01:17:45.067690 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zglhp" Mar 13 01:17:45.241021 master-0 kubenswrapper[19803]: I0313 01:17:45.240946 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:17:45.751291 master-0 kubenswrapper[19803]: I0313 01:17:45.751240 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:17:45.754738 master-0 kubenswrapper[19803]: I0313 01:17:45.754690 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" Mar 13 01:17:49.480003 master-0 kubenswrapper[19803]: I0313 01:17:49.479931 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:17:49.716953 master-0 kubenswrapper[19803]: I0313 01:17:49.716888 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-64xrl" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.443841 
19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-4cbn4"] Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: E0313 01:17:51.444151 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444165 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: E0313 01:17:51.444178 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444184 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: E0313 01:17:51.444217 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" containerName="installer" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444225 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" containerName="installer" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: E0313 01:17:51.444236 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" containerName="installer" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444245 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" containerName="installer" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: E0313 01:17:51.444256 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="348e0611-5b3c-4238-a571-813fc16057df" containerName="prober" Mar 13 01:17:51.444544 
master-0 kubenswrapper[19803]: I0313 01:17:51.444262 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="348e0611-5b3c-4238-a571-813fc16057df" containerName="prober" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: E0313 01:17:51.444272 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" containerName="installer" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444293 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" containerName="installer" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: E0313 01:17:51.444305 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444312 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: E0313 01:17:51.444320 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444327 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: E0313 01:17:51.444334 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfb4407e-71fc-4684-aded-cc84f7e306dc" containerName="installer" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444340 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfb4407e-71fc-4684-aded-cc84f7e306dc" containerName="installer" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: E0313 01:17:51.444353 19803 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="19460daa-7d22-4d32-899c-274b86c56a13" containerName="assisted-installer-controller" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444375 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="19460daa-7d22-4d32-899c-274b86c56a13" containerName="assisted-installer-controller" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444503 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="7106c6fe-7c8d-45b9-bc5c-521db743663f" containerName="installer" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444543 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfb4407e-71fc-4684-aded-cc84f7e306dc" containerName="installer" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444554 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444564 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="kube-apiserver-insecure-readyz" Mar 13 01:17:51.444544 master-0 kubenswrapper[19803]: I0313 01:17:51.444576 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f77c8e18b751d90bc0dfe2d4e304050" containerName="setup" Mar 13 01:17:51.445838 master-0 kubenswrapper[19803]: I0313 01:17:51.444603 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="19460daa-7d22-4d32-899c-274b86c56a13" containerName="assisted-installer-controller" Mar 13 01:17:51.445838 master-0 kubenswrapper[19803]: I0313 01:17:51.444617 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="348e0611-5b3c-4238-a571-813fc16057df" containerName="prober" Mar 13 01:17:51.445838 master-0 kubenswrapper[19803]: I0313 01:17:51.444625 19803 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="a1a56802af72ce1aac6b5077f1695ac0" containerName="kube-scheduler" Mar 13 01:17:51.445838 master-0 kubenswrapper[19803]: I0313 01:17:51.444633 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdcd8438-d33f-490f-a841-8944c58506f8" containerName="installer" Mar 13 01:17:51.445838 master-0 kubenswrapper[19803]: I0313 01:17:51.444643 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" containerName="installer" Mar 13 01:17:51.445838 master-0 kubenswrapper[19803]: I0313 01:17:51.445105 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:17:51.451032 master-0 kubenswrapper[19803]: I0313 01:17:51.450949 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Mar 13 01:17:51.453104 master-0 kubenswrapper[19803]: I0313 01:17:51.451293 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 01:17:51.453104 master-0 kubenswrapper[19803]: I0313 01:17:51.451594 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Mar 13 01:17:51.453104 master-0 kubenswrapper[19803]: I0313 01:17:51.452588 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Mar 13 01:17:51.453104 master-0 kubenswrapper[19803]: I0313 01:17:51.452762 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-g9s2p" Mar 13 01:17:51.458925 master-0 kubenswrapper[19803]: I0313 01:17:51.458857 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Mar 13 01:17:51.467058 master-0 kubenswrapper[19803]: I0313 01:17:51.466960 19803 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-tz67l"] Mar 13 01:17:51.467952 master-0 kubenswrapper[19803]: I0313 01:17:51.467912 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tz67l" Mar 13 01:17:51.473481 master-0 kubenswrapper[19803]: I0313 01:17:51.473441 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Mar 13 01:17:51.473562 master-0 kubenswrapper[19803]: I0313 01:17:51.473532 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Mar 13 01:17:51.476450 master-0 kubenswrapper[19803]: I0313 01:17:51.476419 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-9krk7" Mar 13 01:17:51.482040 master-0 kubenswrapper[19803]: I0313 01:17:51.481994 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-4cbn4"] Mar 13 01:17:51.504388 master-0 kubenswrapper[19803]: I0313 01:17:51.504245 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-vp9bn"] Mar 13 01:17:51.506241 master-0 kubenswrapper[19803]: I0313 01:17:51.505185 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vp9bn" Mar 13 01:17:51.509907 master-0 kubenswrapper[19803]: I0313 01:17:51.509881 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 01:17:51.510247 master-0 kubenswrapper[19803]: I0313 01:17:51.510216 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Mar 13 01:17:51.510454 master-0 kubenswrapper[19803]: I0313 01:17:51.510442 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Mar 13 01:17:51.510667 master-0 kubenswrapper[19803]: I0313 01:17:51.510655 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-ffc5m" Mar 13 01:17:51.525203 master-0 kubenswrapper[19803]: I0313 01:17:51.525149 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vp9bn"] Mar 13 01:17:51.556654 master-0 kubenswrapper[19803]: I0313 01:17:51.556591 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:17:51.559534 master-0 kubenswrapper[19803]: I0313 01:17:51.557337 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1d1a41c-8533-4854-abea-ed42c4d7c71f-serving-cert\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:17:51.559534 master-0 kubenswrapper[19803]: I0313 
01:17:51.557541 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz9xf\" (UniqueName: \"kubernetes.io/projected/a1d1a41c-8533-4854-abea-ed42c4d7c71f-kube-api-access-vz9xf\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:17:51.559534 master-0 kubenswrapper[19803]: I0313 01:17:51.557988 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-config\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:17:51.660538 master-0 kubenswrapper[19803]: I0313 01:17:51.660434 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-config\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:17:51.660538 master-0 kubenswrapper[19803]: I0313 01:17:51.660542 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9czm4\" (UniqueName: \"kubernetes.io/projected/ebf60543-fd92-4826-a16e-7e1ebfd95089-kube-api-access-9czm4\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn" Mar 13 01:17:51.660878 master-0 kubenswrapper[19803]: I0313 01:17:51.660570 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5clr\" (UniqueName: \"kubernetes.io/projected/f1579f52-c608-4d4a-935f-c9b58b003160-kube-api-access-c5clr\") pod 
\"machine-config-server-tz67l\" (UID: \"f1579f52-c608-4d4a-935f-c9b58b003160\") " pod="openshift-machine-config-operator/machine-config-server-tz67l" Mar 13 01:17:51.660878 master-0 kubenswrapper[19803]: I0313 01:17:51.660631 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f1579f52-c608-4d4a-935f-c9b58b003160-certs\") pod \"machine-config-server-tz67l\" (UID: \"f1579f52-c608-4d4a-935f-c9b58b003160\") " pod="openshift-machine-config-operator/machine-config-server-tz67l" Mar 13 01:17:51.661169 master-0 kubenswrapper[19803]: I0313 01:17:51.661126 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:17:51.661169 master-0 kubenswrapper[19803]: I0313 01:17:51.661164 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1d1a41c-8533-4854-abea-ed42c4d7c71f-serving-cert\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:17:51.661276 master-0 kubenswrapper[19803]: I0313 01:17:51.661192 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f1579f52-c608-4d4a-935f-c9b58b003160-node-bootstrap-token\") pod \"machine-config-server-tz67l\" (UID: \"f1579f52-c608-4d4a-935f-c9b58b003160\") " pod="openshift-machine-config-operator/machine-config-server-tz67l" Mar 13 01:17:51.661276 master-0 kubenswrapper[19803]: I0313 01:17:51.661217 19803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn"
Mar 13 01:17:51.661276 master-0 kubenswrapper[19803]: I0313 01:17:51.661249 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz9xf\" (UniqueName: \"kubernetes.io/projected/a1d1a41c-8533-4854-abea-ed42c4d7c71f-kube-api-access-vz9xf\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:17:51.661548 master-0 kubenswrapper[19803]: I0313 01:17:51.661479 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-config\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:17:51.661631 master-0 kubenswrapper[19803]: E0313 01:17:51.661608 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca podName:a1d1a41c-8533-4854-abea-ed42c4d7c71f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:52.161590398 +0000 UTC m=+20.126738077 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca") pod "console-operator-6c7fb6b958-4cbn4" (UID: "a1d1a41c-8533-4854-abea-ed42c4d7c71f") : configmap references non-existent config key: ca-bundle.crt
Mar 13 01:17:51.662072 master-0 kubenswrapper[19803]: I0313 01:17:51.662016 19803 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory"
Mar 13 01:17:51.666302 master-0 kubenswrapper[19803]: I0313 01:17:51.665915 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a1d1a41c-8533-4854-abea-ed42c4d7c71f-serving-cert\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:17:51.683401 master-0 kubenswrapper[19803]: E0313 01:17:51.683318 19803 projected.go:288] Couldn't get configMap openshift-console-operator/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 13 01:17:51.683401 master-0 kubenswrapper[19803]: E0313 01:17:51.683391 19803 projected.go:194] Error preparing data for projected volume kube-api-access-vz9xf for pod openshift-console-operator/console-operator-6c7fb6b958-4cbn4: configmap "kube-root-ca.crt" not found
Mar 13 01:17:51.683710 master-0 kubenswrapper[19803]: E0313 01:17:51.683480 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a1d1a41c-8533-4854-abea-ed42c4d7c71f-kube-api-access-vz9xf podName:a1d1a41c-8533-4854-abea-ed42c4d7c71f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:52.183452787 +0000 UTC m=+20.148600466 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-vz9xf" (UniqueName: "kubernetes.io/projected/a1d1a41c-8533-4854-abea-ed42c4d7c71f-kube-api-access-vz9xf") pod "console-operator-6c7fb6b958-4cbn4" (UID: "a1d1a41c-8533-4854-abea-ed42c4d7c71f") : configmap "kube-root-ca.crt" not found
Mar 13 01:17:51.762678 master-0 kubenswrapper[19803]: I0313 01:17:51.762593 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9czm4\" (UniqueName: \"kubernetes.io/projected/ebf60543-fd92-4826-a16e-7e1ebfd95089-kube-api-access-9czm4\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn"
Mar 13 01:17:51.762678 master-0 kubenswrapper[19803]: I0313 01:17:51.762657 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5clr\" (UniqueName: \"kubernetes.io/projected/f1579f52-c608-4d4a-935f-c9b58b003160-kube-api-access-c5clr\") pod \"machine-config-server-tz67l\" (UID: \"f1579f52-c608-4d4a-935f-c9b58b003160\") " pod="openshift-machine-config-operator/machine-config-server-tz67l"
Mar 13 01:17:51.763126 master-0 kubenswrapper[19803]: I0313 01:17:51.763066 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f1579f52-c608-4d4a-935f-c9b58b003160-certs\") pod \"machine-config-server-tz67l\" (UID: \"f1579f52-c608-4d4a-935f-c9b58b003160\") " pod="openshift-machine-config-operator/machine-config-server-tz67l"
Mar 13 01:17:51.763239 master-0 kubenswrapper[19803]: I0313 01:17:51.763207 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f1579f52-c608-4d4a-935f-c9b58b003160-node-bootstrap-token\") pod \"machine-config-server-tz67l\" (UID: \"f1579f52-c608-4d4a-935f-c9b58b003160\") " pod="openshift-machine-config-operator/machine-config-server-tz67l"
Mar 13 01:17:51.763284 master-0 kubenswrapper[19803]: I0313 01:17:51.763261 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn"
Mar 13 01:17:51.763630 master-0 kubenswrapper[19803]: E0313 01:17:51.763584 19803 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 13 01:17:51.763741 master-0 kubenswrapper[19803]: E0313 01:17:51.763710 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert podName:ebf60543-fd92-4826-a16e-7e1ebfd95089 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:52.263685017 +0000 UTC m=+20.228832696 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert") pod "ingress-canary-vp9bn" (UID: "ebf60543-fd92-4826-a16e-7e1ebfd95089") : secret "canary-serving-cert" not found
Mar 13 01:17:51.767092 master-0 kubenswrapper[19803]: I0313 01:17:51.767049 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/f1579f52-c608-4d4a-935f-c9b58b003160-certs\") pod \"machine-config-server-tz67l\" (UID: \"f1579f52-c608-4d4a-935f-c9b58b003160\") " pod="openshift-machine-config-operator/machine-config-server-tz67l"
Mar 13 01:17:51.767175 master-0 kubenswrapper[19803]: I0313 01:17:51.767123 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/f1579f52-c608-4d4a-935f-c9b58b003160-node-bootstrap-token\") pod \"machine-config-server-tz67l\" (UID: \"f1579f52-c608-4d4a-935f-c9b58b003160\") " pod="openshift-machine-config-operator/machine-config-server-tz67l"
Mar 13 01:17:51.779142 master-0 kubenswrapper[19803]: I0313 01:17:51.779093 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5clr\" (UniqueName: \"kubernetes.io/projected/f1579f52-c608-4d4a-935f-c9b58b003160-kube-api-access-c5clr\") pod \"machine-config-server-tz67l\" (UID: \"f1579f52-c608-4d4a-935f-c9b58b003160\") " pod="openshift-machine-config-operator/machine-config-server-tz67l"
Mar 13 01:17:51.779889 master-0 kubenswrapper[19803]: E0313 01:17:51.779697 19803 projected.go:288] Couldn't get configMap openshift-ingress-canary/kube-root-ca.crt: configmap "kube-root-ca.crt" not found
Mar 13 01:17:51.779889 master-0 kubenswrapper[19803]: E0313 01:17:51.779743 19803 projected.go:194] Error preparing data for projected volume kube-api-access-9czm4 for pod openshift-ingress-canary/ingress-canary-vp9bn: configmap "kube-root-ca.crt" not found
Mar 13 01:17:51.779889 master-0 kubenswrapper[19803]: E0313 01:17:51.779820 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/ebf60543-fd92-4826-a16e-7e1ebfd95089-kube-api-access-9czm4 podName:ebf60543-fd92-4826-a16e-7e1ebfd95089 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:52.279796926 +0000 UTC m=+20.244944605 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-9czm4" (UniqueName: "kubernetes.io/projected/ebf60543-fd92-4826-a16e-7e1ebfd95089-kube-api-access-9czm4") pod "ingress-canary-vp9bn" (UID: "ebf60543-fd92-4826-a16e-7e1ebfd95089") : configmap "kube-root-ca.crt" not found
Mar 13 01:17:51.808032 master-0 kubenswrapper[19803]: I0313 01:17:51.807944 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-tz67l"
Mar 13 01:17:52.162291 master-0 kubenswrapper[19803]: I0313 01:17:52.162147 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"]
Mar 13 01:17:52.163477 master-0 kubenswrapper[19803]: I0313 01:17:52.163446 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.166041 master-0 kubenswrapper[19803]: I0313 01:17:52.166000 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-9pdlp"
Mar 13 01:17:52.166173 master-0 kubenswrapper[19803]: I0313 01:17:52.166117 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config"
Mar 13 01:17:52.166227 master-0 kubenswrapper[19803]: I0313 01:17:52.166195 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls"
Mar 13 01:17:52.166598 master-0 kubenswrapper[19803]: I0313 01:17:52.166571 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca"
Mar 13 01:17:52.169578 master-0 kubenswrapper[19803]: I0313 01:17:52.169538 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:17:52.169720 master-0 kubenswrapper[19803]: I0313 01:17:52.169593 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.169720 master-0 kubenswrapper[19803]: I0313 01:17:52.169651 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pm9ts\" (UniqueName: \"kubernetes.io/projected/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-kube-api-access-pm9ts\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.169720 master-0 kubenswrapper[19803]: I0313 01:17:52.169711 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.169873 master-0 kubenswrapper[19803]: I0313 01:17:52.169758 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.169967 master-0 kubenswrapper[19803]: E0313 01:17:52.169933 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca podName:a1d1a41c-8533-4854-abea-ed42c4d7c71f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:53.169915158 +0000 UTC m=+21.135062837 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca") pod "console-operator-6c7fb6b958-4cbn4" (UID: "a1d1a41c-8533-4854-abea-ed42c4d7c71f") : configmap references non-existent config key: ca-bundle.crt
Mar 13 01:17:52.185616 master-0 kubenswrapper[19803]: I0313 01:17:52.185531 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"]
Mar 13 01:17:52.271667 master-0 kubenswrapper[19803]: I0313 01:17:52.271598 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.271667 master-0 kubenswrapper[19803]: I0313 01:17:52.271671 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn"
Mar 13 01:17:52.272023 master-0 kubenswrapper[19803]: I0313 01:17:52.271710 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pm9ts\" (UniqueName: \"kubernetes.io/projected/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-kube-api-access-pm9ts\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.272023 master-0 kubenswrapper[19803]: I0313 01:17:52.271741 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz9xf\" (UniqueName: \"kubernetes.io/projected/a1d1a41c-8533-4854-abea-ed42c4d7c71f-kube-api-access-vz9xf\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:17:52.272023 master-0 kubenswrapper[19803]: E0313 01:17:52.271762 19803 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 13 01:17:52.272023 master-0 kubenswrapper[19803]: I0313 01:17:52.271784 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.272023 master-0 kubenswrapper[19803]: E0313 01:17:52.271838 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls podName:6b5aa4fd-67eb-4d3b-a06e-90afa825eb41 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:52.771818051 +0000 UTC m=+20.736965740 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-tqxdr" (UID: "6b5aa4fd-67eb-4d3b-a06e-90afa825eb41") : secret "prometheus-operator-tls" not found
Mar 13 01:17:52.272023 master-0 kubenswrapper[19803]: I0313 01:17:52.271864 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.272023 master-0 kubenswrapper[19803]: E0313 01:17:52.271887 19803 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 13 01:17:52.272023 master-0 kubenswrapper[19803]: E0313 01:17:52.271961 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert podName:ebf60543-fd92-4826-a16e-7e1ebfd95089 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:53.271942814 +0000 UTC m=+21.237090503 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert") pod "ingress-canary-vp9bn" (UID: "ebf60543-fd92-4826-a16e-7e1ebfd95089") : secret "canary-serving-cert" not found
Mar 13 01:17:52.272930 master-0 kubenswrapper[19803]: I0313 01:17:52.272888 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-metrics-client-ca\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.276106 master-0 kubenswrapper[19803]: I0313 01:17:52.276049 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-kube-rbac-proxy-config\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.276649 master-0 kubenswrapper[19803]: I0313 01:17:52.276594 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz9xf\" (UniqueName: \"kubernetes.io/projected/a1d1a41c-8533-4854-abea-ed42c4d7c71f-kube-api-access-vz9xf\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:17:52.298130 master-0 kubenswrapper[19803]: I0313 01:17:52.298047 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pm9ts\" (UniqueName: \"kubernetes.io/projected/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-kube-api-access-pm9ts\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.373858 master-0 kubenswrapper[19803]: I0313 01:17:52.373709 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9czm4\" (UniqueName: \"kubernetes.io/projected/ebf60543-fd92-4826-a16e-7e1ebfd95089-kube-api-access-9czm4\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn"
Mar 13 01:17:52.377396 master-0 kubenswrapper[19803]: I0313 01:17:52.377356 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9czm4\" (UniqueName: \"kubernetes.io/projected/ebf60543-fd92-4826-a16e-7e1ebfd95089-kube-api-access-9czm4\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn"
Mar 13 01:17:52.576321 master-0 kubenswrapper[19803]: I0313 01:17:52.576253 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:17:52.576321 master-0 kubenswrapper[19803]: I0313 01:17:52.576327 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:17:52.577077 master-0 kubenswrapper[19803]: I0313 01:17:52.576345 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:17:52.577077 master-0 kubenswrapper[19803]: E0313 01:17:52.576469 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:52.577077 master-0 kubenswrapper[19803]: E0313 01:17:52.576487 19803 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:52.577077 master-0 kubenswrapper[19803]: E0313 01:17:52.576551 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:52.577077 master-0 kubenswrapper[19803]: E0313 01:17:52.576562 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:52.577077 master-0 kubenswrapper[19803]: E0313 01:17:52.576484 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:52.577077 master-0 kubenswrapper[19803]: E0313 01:17:52.576615 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:52.577077 master-0 kubenswrapper[19803]: E0313 01:17:52.576616 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access podName:7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:08.576591649 +0000 UTC m=+36.541739328 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access") pod "installer-4-master-0" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered
Mar 13 01:17:52.577077 master-0 kubenswrapper[19803]: E0313 01:17:52.576663 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:18:08.57664743 +0000 UTC m=+36.541795109 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:17:52.577077 master-0 kubenswrapper[19803]: E0313 01:17:52.576674 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:08.576668801 +0000 UTC m=+36.541816470 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:17:52.743116 master-0 kubenswrapper[19803]: I0313 01:17:52.743034 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tz67l" event={"ID":"f1579f52-c608-4d4a-935f-c9b58b003160","Type":"ContainerStarted","Data":"0f53b50e4a55c085ae0df2b7344206f363c81b13e9ae1af015ea9554516ec1ee"}
Mar 13 01:17:52.743365 master-0 kubenswrapper[19803]: I0313 01:17:52.743128 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-tz67l" event={"ID":"f1579f52-c608-4d4a-935f-c9b58b003160","Type":"ContainerStarted","Data":"dff66d89745b70773d284fd598000f5ed5479ef853c8b900256bc454023963fe"}
Mar 13 01:17:52.763336 master-0 kubenswrapper[19803]: I0313 01:17:52.763255 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-tz67l" podStartSLOduration=1.763234932 podStartE2EDuration="1.763234932s" podCreationTimestamp="2026-03-13 01:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:17:52.761592382 +0000 UTC m=+20.726740101" watchObservedRunningTime="2026-03-13 01:17:52.763234932 +0000 UTC m=+20.728382601"
Mar 13 01:17:52.779024 master-0 kubenswrapper[19803]: I0313 01:17:52.778959 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:52.779680 master-0 kubenswrapper[19803]: E0313 01:17:52.779597 19803 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 13 01:17:52.779810 master-0 kubenswrapper[19803]: E0313 01:17:52.779774 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls podName:6b5aa4fd-67eb-4d3b-a06e-90afa825eb41 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:53.77973633 +0000 UTC m=+21.744884049 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-tqxdr" (UID: "6b5aa4fd-67eb-4d3b-a06e-90afa825eb41") : secret "prometheus-operator-tls" not found
Mar 13 01:17:53.193118 master-0 kubenswrapper[19803]: I0313 01:17:53.192394 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:17:53.193118 master-0 kubenswrapper[19803]: E0313 01:17:53.192633 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca podName:a1d1a41c-8533-4854-abea-ed42c4d7c71f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:55.192615032 +0000 UTC m=+23.157762711 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca") pod "console-operator-6c7fb6b958-4cbn4" (UID: "a1d1a41c-8533-4854-abea-ed42c4d7c71f") : configmap references non-existent config key: ca-bundle.crt
Mar 13 01:17:53.295466 master-0 kubenswrapper[19803]: I0313 01:17:53.294396 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn"
Mar 13 01:17:53.295466 master-0 kubenswrapper[19803]: E0313 01:17:53.294779 19803 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 13 01:17:53.295466 master-0 kubenswrapper[19803]: E0313 01:17:53.294912 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert podName:ebf60543-fd92-4826-a16e-7e1ebfd95089 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:55.294882722 +0000 UTC m=+23.260030471 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert") pod "ingress-canary-vp9bn" (UID: "ebf60543-fd92-4826-a16e-7e1ebfd95089") : secret "canary-serving-cert" not found
Mar 13 01:17:53.800725 master-0 kubenswrapper[19803]: I0313 01:17:53.800582 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:53.801255 master-0 kubenswrapper[19803]: E0313 01:17:53.800745 19803 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 13 01:17:53.801255 master-0 kubenswrapper[19803]: E0313 01:17:53.800820 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls podName:6b5aa4fd-67eb-4d3b-a06e-90afa825eb41 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:55.800803269 +0000 UTC m=+23.765950938 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-tqxdr" (UID: "6b5aa4fd-67eb-4d3b-a06e-90afa825eb41") : secret "prometheus-operator-tls" not found
Mar 13 01:17:54.673978 master-0 kubenswrapper[19803]: I0313 01:17:54.673875 19803 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"]
Mar 13 01:17:54.674296 master-0 kubenswrapper[19803]: I0313 01:17:54.674197 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="f417e14665db2ffffa887ce21c9ff0ed" containerName="startup-monitor" containerID="cri-o://1343b3441a72fc54f57c90f1ad8e6009baa9cad0afaf07655566864af4172871" gracePeriod=5
Mar 13 01:17:55.216947 master-0 kubenswrapper[19803]: I0313 01:17:55.216831 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:17:55.217463 master-0 kubenswrapper[19803]: E0313 01:17:55.217034 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca podName:a1d1a41c-8533-4854-abea-ed42c4d7c71f nodeName:}" failed. No retries permitted until 2026-03-13 01:17:59.217010784 +0000 UTC m=+27.182158463 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca") pod "console-operator-6c7fb6b958-4cbn4" (UID: "a1d1a41c-8533-4854-abea-ed42c4d7c71f") : configmap references non-existent config key: ca-bundle.crt
Mar 13 01:17:55.318758 master-0 kubenswrapper[19803]: I0313 01:17:55.318666 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn"
Mar 13 01:17:55.319070 master-0 kubenswrapper[19803]: E0313 01:17:55.319000 19803 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 13 01:17:55.319187 master-0 kubenswrapper[19803]: E0313 01:17:55.319162 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert podName:ebf60543-fd92-4826-a16e-7e1ebfd95089 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:59.31912809 +0000 UTC m=+27.284275809 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert") pod "ingress-canary-vp9bn" (UID: "ebf60543-fd92-4826-a16e-7e1ebfd95089") : secret "canary-serving-cert" not found
Mar 13 01:17:55.830418 master-0 kubenswrapper[19803]: I0313 01:17:55.830323 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"
Mar 13 01:17:55.830678 master-0 kubenswrapper[19803]: E0313 01:17:55.830547 19803 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found
Mar 13 01:17:55.830678 master-0 kubenswrapper[19803]: E0313 01:17:55.830625 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls podName:6b5aa4fd-67eb-4d3b-a06e-90afa825eb41 nodeName:}" failed. No retries permitted until 2026-03-13 01:17:59.83060508 +0000 UTC m=+27.795752759 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-tqxdr" (UID: "6b5aa4fd-67eb-4d3b-a06e-90afa825eb41") : secret "prometheus-operator-tls" not found
Mar 13 01:17:59.287052 master-0 kubenswrapper[19803]: I0313 01:17:59.286932 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:17:59.288435 master-0 kubenswrapper[19803]: E0313 01:17:59.287218 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca podName:a1d1a41c-8533-4854-abea-ed42c4d7c71f nodeName:}" failed. No retries permitted until 2026-03-13 01:18:07.287148632 +0000 UTC m=+35.252296341 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca") pod "console-operator-6c7fb6b958-4cbn4" (UID: "a1d1a41c-8533-4854-abea-ed42c4d7c71f") : configmap references non-existent config key: ca-bundle.crt
Mar 13 01:17:59.389183 master-0 kubenswrapper[19803]: I0313 01:17:59.389048 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn"
Mar 13 01:17:59.389703 master-0 kubenswrapper[19803]: E0313 01:17:59.389294 19803 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: secret "canary-serving-cert" not found
Mar 13 01:17:59.389703 master-0 kubenswrapper[19803]: E0313 01:17:59.389443 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert podName:ebf60543-fd92-4826-a16e-7e1ebfd95089 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:07.389393271 +0000 UTC m=+35.354540960 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert") pod "ingress-canary-vp9bn" (UID: "ebf60543-fd92-4826-a16e-7e1ebfd95089") : secret "canary-serving-cert" not found Mar 13 01:17:59.803101 master-0 kubenswrapper[19803]: I0313 01:17:59.803007 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_f417e14665db2ffffa887ce21c9ff0ed/startup-monitor/0.log" Mar 13 01:17:59.803602 master-0 kubenswrapper[19803]: I0313 01:17:59.803112 19803 generic.go:334] "Generic (PLEG): container finished" podID="f417e14665db2ffffa887ce21c9ff0ed" containerID="1343b3441a72fc54f57c90f1ad8e6009baa9cad0afaf07655566864af4172871" exitCode=137 Mar 13 01:17:59.898835 master-0 kubenswrapper[19803]: I0313 01:17:59.898750 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr" Mar 13 01:17:59.899281 master-0 kubenswrapper[19803]: E0313 01:17:59.899030 19803 secret.go:189] Couldn't get secret openshift-monitoring/prometheus-operator-tls: secret "prometheus-operator-tls" not found Mar 13 01:17:59.899281 master-0 kubenswrapper[19803]: E0313 01:17:59.899186 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls podName:6b5aa4fd-67eb-4d3b-a06e-90afa825eb41 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:07.899155659 +0000 UTC m=+35.864303378 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "prometheus-operator-tls" (UniqueName: "kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls") pod "prometheus-operator-5ff8674d55-tqxdr" (UID: "6b5aa4fd-67eb-4d3b-a06e-90afa825eb41") : secret "prometheus-operator-tls" not found Mar 13 01:18:00.241490 master-0 kubenswrapper[19803]: I0313 01:18:00.241434 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_f417e14665db2ffffa887ce21c9ff0ed/startup-monitor/0.log" Mar 13 01:18:00.241761 master-0 kubenswrapper[19803]: I0313 01:18:00.241547 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:18:00.321229 master-0 kubenswrapper[19803]: I0313 01:18:00.321151 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="" Mar 13 01:18:00.341057 master-0 kubenswrapper[19803]: I0313 01:18:00.340956 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 01:18:00.341057 master-0 kubenswrapper[19803]: I0313 01:18:00.341008 19803 kubelet.go:2649] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="6237245c-3a04-47e2-861d-7d2e77d416bd" Mar 13 01:18:00.341057 master-0 kubenswrapper[19803]: I0313 01:18:00.341028 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 01:18:00.341057 master-0 kubenswrapper[19803]: I0313 01:18:00.341038 19803 kubelet.go:2673] "Unable to find pod for mirror pod, skipping" mirrorPod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" mirrorPodUID="6237245c-3a04-47e2-861d-7d2e77d416bd" Mar 13 01:18:00.406608 master-0 
kubenswrapper[19803]: I0313 01:18:00.406492 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") pod \"f417e14665db2ffffa887ce21c9ff0ed\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " Mar 13 01:18:00.406608 master-0 kubenswrapper[19803]: I0313 01:18:00.406609 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") pod \"f417e14665db2ffffa887ce21c9ff0ed\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " Mar 13 01:18:00.406881 master-0 kubenswrapper[19803]: I0313 01:18:00.406671 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") pod \"f417e14665db2ffffa887ce21c9ff0ed\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " Mar 13 01:18:00.406881 master-0 kubenswrapper[19803]: I0313 01:18:00.406729 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") pod \"f417e14665db2ffffa887ce21c9ff0ed\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " Mar 13 01:18:00.406881 master-0 kubenswrapper[19803]: I0313 01:18:00.406763 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests" (OuterVolumeSpecName: "manifests") pod "f417e14665db2ffffa887ce21c9ff0ed" (UID: "f417e14665db2ffffa887ce21c9ff0ed"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:18:00.406881 master-0 kubenswrapper[19803]: I0313 01:18:00.406825 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log" (OuterVolumeSpecName: "var-log") pod "f417e14665db2ffffa887ce21c9ff0ed" (UID: "f417e14665db2ffffa887ce21c9ff0ed"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:18:00.407007 master-0 kubenswrapper[19803]: I0313 01:18:00.406909 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f417e14665db2ffffa887ce21c9ff0ed" (UID: "f417e14665db2ffffa887ce21c9ff0ed"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:18:00.407007 master-0 kubenswrapper[19803]: I0313 01:18:00.406947 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") pod \"f417e14665db2ffffa887ce21c9ff0ed\" (UID: \"f417e14665db2ffffa887ce21c9ff0ed\") " Mar 13 01:18:00.407068 master-0 kubenswrapper[19803]: I0313 01:18:00.407029 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock" (OuterVolumeSpecName: "var-lock") pod "f417e14665db2ffffa887ce21c9ff0ed" (UID: "f417e14665db2ffffa887ce21c9ff0ed"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:18:00.407488 master-0 kubenswrapper[19803]: I0313 01:18:00.407452 19803 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-manifests\") on node \"master-0\" DevicePath \"\"" Mar 13 01:18:00.407554 master-0 kubenswrapper[19803]: I0313 01:18:00.407486 19803 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-log\") on node \"master-0\" DevicePath \"\"" Mar 13 01:18:00.407554 master-0 kubenswrapper[19803]: I0313 01:18:00.407505 19803 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:18:00.407616 master-0 kubenswrapper[19803]: I0313 01:18:00.407551 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:18:00.409374 master-0 kubenswrapper[19803]: I0313 01:18:00.409332 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:18:00.409638 master-0 kubenswrapper[19803]: I0313 01:18:00.409605 19803 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 13 01:18:00.412548 master-0 kubenswrapper[19803]: I0313 01:18:00.411574 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f417e14665db2ffffa887ce21c9ff0ed" (UID: "f417e14665db2ffffa887ce21c9ff0ed"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:18:00.426084 master-0 kubenswrapper[19803]: I0313 01:18:00.426050 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nlhbx" Mar 13 01:18:00.509222 master-0 kubenswrapper[19803]: I0313 01:18:00.509157 19803 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f417e14665db2ffffa887ce21c9ff0ed-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:18:00.814132 master-0 kubenswrapper[19803]: I0313 01:18:00.813989 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_f417e14665db2ffffa887ce21c9ff0ed/startup-monitor/0.log" Mar 13 01:18:00.815165 master-0 kubenswrapper[19803]: I0313 01:18:00.815126 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:18:00.819001 master-0 kubenswrapper[19803]: I0313 01:18:00.818934 19803 scope.go:117] "RemoveContainer" containerID="1343b3441a72fc54f57c90f1ad8e6009baa9cad0afaf07655566864af4172871" Mar 13 01:18:02.334165 master-0 kubenswrapper[19803]: I0313 01:18:02.334022 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f417e14665db2ffffa887ce21c9ff0ed" path="/var/lib/kubelet/pods/f417e14665db2ffffa887ce21c9ff0ed/volumes" Mar 13 01:18:07.313410 master-0 kubenswrapper[19803]: I0313 01:18:07.313316 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:18:07.314743 master-0 kubenswrapper[19803]: E0313 01:18:07.313554 19803 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca podName:a1d1a41c-8533-4854-abea-ed42c4d7c71f nodeName:}" failed. No retries permitted until 2026-03-13 01:18:23.313526767 +0000 UTC m=+51.278674446 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca") pod "console-operator-6c7fb6b958-4cbn4" (UID: "a1d1a41c-8533-4854-abea-ed42c4d7c71f") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:07.416501 master-0 kubenswrapper[19803]: I0313 01:18:07.416363 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn" Mar 13 01:18:07.422489 master-0 kubenswrapper[19803]: I0313 01:18:07.422438 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/ebf60543-fd92-4826-a16e-7e1ebfd95089-cert\") pod \"ingress-canary-vp9bn\" (UID: \"ebf60543-fd92-4826-a16e-7e1ebfd95089\") " pod="openshift-ingress-canary/ingress-canary-vp9bn" Mar 13 01:18:07.434391 master-0 kubenswrapper[19803]: I0313 01:18:07.434364 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-vp9bn" Mar 13 01:18:07.875420 master-0 kubenswrapper[19803]: I0313 01:18:07.875261 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-vp9bn"] Mar 13 01:18:07.888839 master-0 kubenswrapper[19803]: W0313 01:18:07.888735 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podebf60543_fd92_4826_a16e_7e1ebfd95089.slice/crio-0cbeb894cfd483e4a20064272498690cecc0c595bef6e1f59e97dc97f7ed15be WatchSource:0}: Error finding container 0cbeb894cfd483e4a20064272498690cecc0c595bef6e1f59e97dc97f7ed15be: Status 404 returned error can't find the container with id 0cbeb894cfd483e4a20064272498690cecc0c595bef6e1f59e97dc97f7ed15be Mar 13 01:18:07.931258 master-0 kubenswrapper[19803]: I0313 01:18:07.931222 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr" Mar 13 01:18:07.935060 master-0 kubenswrapper[19803]: I0313 01:18:07.935017 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-operator-tls\" (UniqueName: \"kubernetes.io/secret/6b5aa4fd-67eb-4d3b-a06e-90afa825eb41-prometheus-operator-tls\") pod \"prometheus-operator-5ff8674d55-tqxdr\" (UID: \"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41\") " pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr" Mar 13 01:18:08.084619 master-0 kubenswrapper[19803]: I0313 01:18:08.084551 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr" Mar 13 01:18:08.570083 master-0 kubenswrapper[19803]: I0313 01:18:08.570021 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr"] Mar 13 01:18:08.578337 master-0 kubenswrapper[19803]: W0313 01:18:08.578294 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b5aa4fd_67eb_4d3b_a06e_90afa825eb41.slice/crio-2013ad69fbc91c4e3a7fc99bda519d5d4a891620ea156ac304049ee8931b29cd WatchSource:0}: Error finding container 2013ad69fbc91c4e3a7fc99bda519d5d4a891620ea156ac304049ee8931b29cd: Status 404 returned error can't find the container with id 2013ad69fbc91c4e3a7fc99bda519d5d4a891620ea156ac304049ee8931b29cd Mar 13 01:18:08.581380 master-0 kubenswrapper[19803]: I0313 01:18:08.580931 19803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 01:18:08.665666 master-0 kubenswrapper[19803]: I0313 01:18:08.665300 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:18:08.665666 master-0 kubenswrapper[19803]: I0313 01:18:08.665392 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:18:08.665666 master-0 kubenswrapper[19803]: I0313 01:18:08.665410 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:18:08.665666 master-0 kubenswrapper[19803]: E0313 01:18:08.665454 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:18:08.665666 master-0 kubenswrapper[19803]: E0313 01:18:08.665479 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:18:08.665666 master-0 kubenswrapper[19803]: E0313 01:18:08.665540 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:18:40.665524772 +0000 UTC m=+68.630672451 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:18:08.666203 master-0 kubenswrapper[19803]: E0313 01:18:08.665773 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:18:08.666203 master-0 kubenswrapper[19803]: E0313 01:18:08.665761 19803 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:18:08.666203 master-0 kubenswrapper[19803]: E0313 01:18:08.665869 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:18:08.666203 master-0 kubenswrapper[19803]: E0313 01:18:08.665788 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:18:08.666203 master-0 kubenswrapper[19803]: E0313 01:18:08.665981 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access podName:7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:40.665939173 +0000 UTC m=+68.631086882 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access") pod "installer-4-master-0" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:18:08.666203 master-0 kubenswrapper[19803]: E0313 01:18:08.666013 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:40.665999674 +0000 UTC m=+68.631147383 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:18:08.875229 master-0 kubenswrapper[19803]: I0313 01:18:08.875013 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vp9bn" event={"ID":"ebf60543-fd92-4826-a16e-7e1ebfd95089","Type":"ContainerStarted","Data":"e477609ab9f460c654c2328ea19660cd65e7ee567e8725bbb04a88e89876a39d"} Mar 13 01:18:08.875229 master-0 kubenswrapper[19803]: I0313 01:18:08.875095 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-vp9bn" event={"ID":"ebf60543-fd92-4826-a16e-7e1ebfd95089","Type":"ContainerStarted","Data":"0cbeb894cfd483e4a20064272498690cecc0c595bef6e1f59e97dc97f7ed15be"} Mar 13 01:18:08.876151 master-0 kubenswrapper[19803]: I0313 01:18:08.876092 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr" 
event={"ID":"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41","Type":"ContainerStarted","Data":"2013ad69fbc91c4e3a7fc99bda519d5d4a891620ea156ac304049ee8931b29cd"} Mar 13 01:18:08.897362 master-0 kubenswrapper[19803]: I0313 01:18:08.897274 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-vp9bn" podStartSLOduration=17.897251898 podStartE2EDuration="17.897251898s" podCreationTimestamp="2026-03-13 01:17:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:18:08.896635242 +0000 UTC m=+36.861782931" watchObservedRunningTime="2026-03-13 01:18:08.897251898 +0000 UTC m=+36.862399577" Mar 13 01:18:09.164941 master-0 kubenswrapper[19803]: I0313 01:18:09.164785 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:18:10.890993 master-0 kubenswrapper[19803]: I0313 01:18:10.890929 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr" event={"ID":"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41","Type":"ContainerStarted","Data":"f23464f0d82b82e670a834a7f717af1f8db5e51cf6c4321109ce6dd089443978"} Mar 13 01:18:10.890993 master-0 kubenswrapper[19803]: I0313 01:18:10.890990 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr" event={"ID":"6b5aa4fd-67eb-4d3b-a06e-90afa825eb41","Type":"ContainerStarted","Data":"2a44ae7e0ae66d00b623f7f01d57045d099ec08847ef551c315c3a58a688e41e"} Mar 13 01:18:10.911491 master-0 kubenswrapper[19803]: I0313 01:18:10.911408 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-operator-5ff8674d55-tqxdr" podStartSLOduration=17.658052598 podStartE2EDuration="18.911388281s" podCreationTimestamp="2026-03-13 01:17:52 +0000 UTC" 
firstStartedPulling="2026-03-13 01:18:08.580863398 +0000 UTC m=+36.546011077" lastFinishedPulling="2026-03-13 01:18:09.834199081 +0000 UTC m=+37.799346760" observedRunningTime="2026-03-13 01:18:10.910013408 +0000 UTC m=+38.875161087" watchObservedRunningTime="2026-03-13 01:18:10.911388281 +0000 UTC m=+38.876535960" Mar 13 01:18:12.584965 master-0 kubenswrapper[19803]: I0313 01:18:12.584918 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/node-exporter-85xmz"] Mar 13 01:18:12.585646 master-0 kubenswrapper[19803]: E0313 01:18:12.585187 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f417e14665db2ffffa887ce21c9ff0ed" containerName="startup-monitor" Mar 13 01:18:12.585646 master-0 kubenswrapper[19803]: I0313 01:18:12.585200 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f417e14665db2ffffa887ce21c9ff0ed" containerName="startup-monitor" Mar 13 01:18:12.585646 master-0 kubenswrapper[19803]: I0313 01:18:12.585329 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f417e14665db2ffffa887ce21c9ff0ed" containerName="startup-monitor" Mar 13 01:18:12.586284 master-0 kubenswrapper[19803]: I0313 01:18:12.586111 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.588862 master-0 kubenswrapper[19803]: I0313 01:18:12.588817 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-75ldw" Mar 13 01:18:12.594032 master-0 kubenswrapper[19803]: I0313 01:18:12.593982 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 01:18:12.594201 master-0 kubenswrapper[19803]: I0313 01:18:12.593997 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config" Mar 13 01:18:12.606224 master-0 kubenswrapper[19803]: I0313 01:18:12.606177 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5"] Mar 13 01:18:12.607287 master-0 kubenswrapper[19803]: I0313 01:18:12.607258 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.610388 master-0 kubenswrapper[19803]: I0313 01:18:12.610341 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-cjzjq" Mar 13 01:18:12.610777 master-0 kubenswrapper[19803]: I0313 01:18:12.610751 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 13 01:18:12.610926 master-0 kubenswrapper[19803]: I0313 01:18:12.610903 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 13 01:18:12.630016 master-0 kubenswrapper[19803]: I0313 01:18:12.629934 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5"] Mar 13 01:18:12.689306 master-0 kubenswrapper[19803]: I0313 01:18:12.685562 19803 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf"] Mar 13 01:18:12.689306 master-0 kubenswrapper[19803]: I0313 01:18:12.686780 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.692990 master-0 kubenswrapper[19803]: I0313 01:18:12.692665 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls" Mar 13 01:18:12.692990 master-0 kubenswrapper[19803]: I0313 01:18:12.692778 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 13 01:18:12.693199 master-0 kubenswrapper[19803]: I0313 01:18:12.693134 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-hxxzs" Mar 13 01:18:12.693672 master-0 kubenswrapper[19803]: I0313 01:18:12.693236 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap" Mar 13 01:18:12.704033 master-0 kubenswrapper[19803]: I0313 01:18:12.703948 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf"] Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.726988 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-wtmp\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727064 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5e147d06-d872-4691-95f8-b9d8b7584780-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727092 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-tls\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727146 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dshxx\" (UniqueName: \"kubernetes.io/projected/5e147d06-d872-4691-95f8-b9d8b7584780-kube-api-access-dshxx\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727169 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-metrics-client-ca\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727192 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-sys\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " 
pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727208 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5e147d06-d872-4691-95f8-b9d8b7584780-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727233 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llw7w\" (UniqueName: \"kubernetes.io/projected/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-kube-api-access-llw7w\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727259 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e147d06-d872-4691-95f8-b9d8b7584780-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727296 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-textfile\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727317 19803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.728200 master-0 kubenswrapper[19803]: I0313 01:18:12.727337 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-root\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.818148 master-0 kubenswrapper[19803]: I0313 01:18:12.818090 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-r6jcs"] Mar 13 01:18:12.819064 master-0 kubenswrapper[19803]: I0313 01:18:12.819032 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:12.824589 master-0 kubenswrapper[19803]: I0313 01:18:12.824546 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-7ghfl" Mar 13 01:18:12.824817 master-0 kubenswrapper[19803]: I0313 01:18:12.824794 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 13 01:18:12.828253 master-0 kubenswrapper[19803]: I0313 01:18:12.828208 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zqcd\" (UniqueName: \"kubernetes.io/projected/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-api-access-6zqcd\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.828359 master-0 kubenswrapper[19803]: I0313 01:18:12.828258 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dshxx\" (UniqueName: \"kubernetes.io/projected/5e147d06-d872-4691-95f8-b9d8b7584780-kube-api-access-dshxx\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.828359 master-0 kubenswrapper[19803]: I0313 01:18:12.828316 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-metrics-client-ca\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.828457 master-0 kubenswrapper[19803]: I0313 01:18:12.828368 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-sys\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.828457 master-0 kubenswrapper[19803]: I0313 01:18:12.828397 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5e147d06-d872-4691-95f8-b9d8b7584780-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.828457 master-0 kubenswrapper[19803]: I0313 01:18:12.828424 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llw7w\" (UniqueName: \"kubernetes.io/projected/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-kube-api-access-llw7w\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.828457 master-0 kubenswrapper[19803]: I0313 01:18:12.828446 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.828661 master-0 kubenswrapper[19803]: I0313 01:18:12.828465 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ef69514-736d-44ba-a5e9-703bd06d52a8-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.828661 master-0 
kubenswrapper[19803]: I0313 01:18:12.828489 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e147d06-d872-4691-95f8-b9d8b7584780-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.828661 master-0 kubenswrapper[19803]: I0313 01:18:12.828546 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-textfile\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.828661 master-0 kubenswrapper[19803]: I0313 01:18:12.828577 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.828661 master-0 kubenswrapper[19803]: I0313 01:18:12.828596 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1ef69514-736d-44ba-a5e9-703bd06d52a8-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.828661 master-0 kubenswrapper[19803]: I0313 01:18:12.828617 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"root\" (UniqueName: \"kubernetes.io/host-path/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-root\") pod 
\"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.828661 master-0 kubenswrapper[19803]: I0313 01:18:12.828653 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.828988 master-0 kubenswrapper[19803]: I0313 01:18:12.828675 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-wtmp\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.828988 master-0 kubenswrapper[19803]: I0313 01:18:12.828699 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/5e147d06-d872-4691-95f8-b9d8b7584780-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.828988 master-0 kubenswrapper[19803]: I0313 01:18:12.828720 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-tls\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.828988 master-0 kubenswrapper[19803]: I0313 01:18:12.828751 
19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.829137 master-0 kubenswrapper[19803]: I0313 01:18:12.829003 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"root\" (UniqueName: \"kubernetes.io/host-path/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-root\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.829265 master-0 kubenswrapper[19803]: I0313 01:18:12.829232 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/5e147d06-d872-4691-95f8-b9d8b7584780-metrics-client-ca\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.829555 master-0 kubenswrapper[19803]: I0313 01:18:12.829485 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-wtmp\" (UniqueName: \"kubernetes.io/host-path/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-wtmp\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.830031 master-0 kubenswrapper[19803]: E0313 01:18:12.829980 19803 secret.go:189] Couldn't get secret openshift-monitoring/node-exporter-tls: secret "node-exporter-tls" not found Mar 13 01:18:12.830113 master-0 kubenswrapper[19803]: E0313 01:18:12.830086 19803 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/secret/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-tls podName:d948e0c5-a593-4fe0-bc58-8f157cd5ae1b nodeName:}" failed. No retries permitted until 2026-03-13 01:18:13.33005949 +0000 UTC m=+41.295207269 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-exporter-tls" (UniqueName: "kubernetes.io/secret/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-tls") pod "node-exporter-85xmz" (UID: "d948e0c5-a593-4fe0-bc58-8f157cd5ae1b") : secret "node-exporter-tls" not found Mar 13 01:18:12.830338 master-0 kubenswrapper[19803]: I0313 01:18:12.830312 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-textfile\" (UniqueName: \"kubernetes.io/empty-dir/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-textfile\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.830605 master-0 kubenswrapper[19803]: I0313 01:18:12.830571 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-sys\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.830964 master-0 kubenswrapper[19803]: I0313 01:18:12.830936 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-metrics-client-ca\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.834105 master-0 kubenswrapper[19803]: I0313 01:18:12.834074 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-kube-rbac-proxy-config\" (UniqueName: 
\"kubernetes.io/secret/5e147d06-d872-4691-95f8-b9d8b7584780-openshift-state-metrics-kube-rbac-proxy-config\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.834743 master-0 kubenswrapper[19803]: I0313 01:18:12.834682 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e147d06-d872-4691-95f8-b9d8b7584780-openshift-state-metrics-tls\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.836583 master-0 kubenswrapper[19803]: I0313 01:18:12.835238 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-kube-rbac-proxy-config\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.861586 master-0 kubenswrapper[19803]: I0313 01:18:12.859332 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dshxx\" (UniqueName: \"kubernetes.io/projected/5e147d06-d872-4691-95f8-b9d8b7584780-kube-api-access-dshxx\") pod \"openshift-state-metrics-74cc79fd76-fpgj5\" (UID: \"5e147d06-d872-4691-95f8-b9d8b7584780\") " pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:12.862328 master-0 kubenswrapper[19803]: I0313 01:18:12.862289 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llw7w\" (UniqueName: \"kubernetes.io/projected/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-kube-api-access-llw7w\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " 
pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:12.929502 master-0 kubenswrapper[19803]: I0313 01:18:12.929452 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.929502 master-0 kubenswrapper[19803]: I0313 01:18:12.929522 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a557547-de25-4165-a4f5-370b54cd7f70-host\") pod \"node-ca-r6jcs\" (UID: \"0a557547-de25-4165-a4f5-370b54cd7f70\") " pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:12.929808 master-0 kubenswrapper[19803]: I0313 01:18:12.929543 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0a557547-de25-4165-a4f5-370b54cd7f70-serviceca\") pod \"node-ca-r6jcs\" (UID: \"0a557547-de25-4165-a4f5-370b54cd7f70\") " pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:12.929808 master-0 kubenswrapper[19803]: I0313 01:18:12.929563 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zqcd\" (UniqueName: \"kubernetes.io/projected/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-api-access-6zqcd\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.929808 master-0 kubenswrapper[19803]: I0313 01:18:12.929607 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.929808 master-0 kubenswrapper[19803]: I0313 01:18:12.929719 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ef69514-736d-44ba-a5e9-703bd06d52a8-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.930061 master-0 kubenswrapper[19803]: I0313 01:18:12.930036 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1ef69514-736d-44ba-a5e9-703bd06d52a8-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.930122 master-0 kubenswrapper[19803]: I0313 01:18:12.930070 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xv9rl\" (UniqueName: \"kubernetes.io/projected/0a557547-de25-4165-a4f5-370b54cd7f70-kube-api-access-xv9rl\") pod \"node-ca-r6jcs\" (UID: \"0a557547-de25-4165-a4f5-370b54cd7f70\") " pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:12.930122 master-0 kubenswrapper[19803]: I0313 01:18:12.930109 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " 
pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.930925 master-0 kubenswrapper[19803]: I0313 01:18:12.930460 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-custom-resource-state-configmap\" (UniqueName: \"kubernetes.io/configmap/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-state-metrics-custom-resource-state-configmap\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.930925 master-0 kubenswrapper[19803]: I0313 01:18:12.930856 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"volume-directive-shadow\" (UniqueName: \"kubernetes.io/empty-dir/1ef69514-736d-44ba-a5e9-703bd06d52a8-volume-directive-shadow\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.931243 master-0 kubenswrapper[19803]: I0313 01:18:12.931178 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/1ef69514-736d-44ba-a5e9-703bd06d52a8-metrics-client-ca\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.935137 master-0 kubenswrapper[19803]: I0313 01:18:12.935091 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-state-metrics-kube-rbac-proxy-config\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.935417 master-0 kubenswrapper[19803]: I0313 01:18:12.935382 19803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls\" (UniqueName: \"kubernetes.io/secret/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-state-metrics-tls\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.948501 master-0 kubenswrapper[19803]: I0313 01:18:12.948432 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zqcd\" (UniqueName: \"kubernetes.io/projected/1ef69514-736d-44ba-a5e9-703bd06d52a8-kube-api-access-6zqcd\") pod \"kube-state-metrics-68b88f8cb5-6w4pf\" (UID: \"1ef69514-736d-44ba-a5e9-703bd06d52a8\") " pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:12.970537 master-0 kubenswrapper[19803]: I0313 01:18:12.969981 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" Mar 13 01:18:13.016542 master-0 kubenswrapper[19803]: I0313 01:18:13.015931 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" Mar 13 01:18:13.034540 master-0 kubenswrapper[19803]: I0313 01:18:13.031804 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a557547-de25-4165-a4f5-370b54cd7f70-host\") pod \"node-ca-r6jcs\" (UID: \"0a557547-de25-4165-a4f5-370b54cd7f70\") " pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:13.034540 master-0 kubenswrapper[19803]: I0313 01:18:13.031848 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0a557547-de25-4165-a4f5-370b54cd7f70-serviceca\") pod \"node-ca-r6jcs\" (UID: \"0a557547-de25-4165-a4f5-370b54cd7f70\") " pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:13.034540 master-0 kubenswrapper[19803]: I0313 01:18:13.031967 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/0a557547-de25-4165-a4f5-370b54cd7f70-host\") pod \"node-ca-r6jcs\" (UID: \"0a557547-de25-4165-a4f5-370b54cd7f70\") " pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:13.034540 master-0 kubenswrapper[19803]: I0313 01:18:13.032141 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xv9rl\" (UniqueName: \"kubernetes.io/projected/0a557547-de25-4165-a4f5-370b54cd7f70-kube-api-access-xv9rl\") pod \"node-ca-r6jcs\" (UID: \"0a557547-de25-4165-a4f5-370b54cd7f70\") " pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:13.034540 master-0 kubenswrapper[19803]: I0313 01:18:13.032964 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/0a557547-de25-4165-a4f5-370b54cd7f70-serviceca\") pod \"node-ca-r6jcs\" (UID: \"0a557547-de25-4165-a4f5-370b54cd7f70\") " pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:13.060996 master-0 
kubenswrapper[19803]: I0313 01:18:13.056292 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xv9rl\" (UniqueName: \"kubernetes.io/projected/0a557547-de25-4165-a4f5-370b54cd7f70-kube-api-access-xv9rl\") pod \"node-ca-r6jcs\" (UID: \"0a557547-de25-4165-a4f5-370b54cd7f70\") " pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:13.165530 master-0 kubenswrapper[19803]: I0313 01:18:13.162444 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-r6jcs" Mar 13 01:18:13.337004 master-0 kubenswrapper[19803]: I0313 01:18:13.336777 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-tls\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:13.346530 master-0 kubenswrapper[19803]: I0313 01:18:13.341861 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-exporter-tls\" (UniqueName: \"kubernetes.io/secret/d948e0c5-a593-4fe0-bc58-8f157cd5ae1b-node-exporter-tls\") pod \"node-exporter-85xmz\" (UID: \"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b\") " pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:13.417483 master-0 kubenswrapper[19803]: I0313 01:18:13.417355 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5"] Mar 13 01:18:13.425208 master-0 kubenswrapper[19803]: W0313 01:18:13.425148 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5e147d06_d872_4691_95f8_b9d8b7584780.slice/crio-be8933e83162d580ad05aa7efd8094c9ad4d30f3b057c8d77c47edbf49c12d2e WatchSource:0}: Error finding container be8933e83162d580ad05aa7efd8094c9ad4d30f3b057c8d77c47edbf49c12d2e: Status 404 returned 
error can't find the container with id be8933e83162d580ad05aa7efd8094c9ad4d30f3b057c8d77c47edbf49c12d2e Mar 13 01:18:13.510619 master-0 kubenswrapper[19803]: I0313 01:18:13.510541 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/node-exporter-85xmz" Mar 13 01:18:13.519866 master-0 kubenswrapper[19803]: I0313 01:18:13.517008 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf"] Mar 13 01:18:13.547565 master-0 kubenswrapper[19803]: W0313 01:18:13.540637 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1ef69514_736d_44ba_a5e9_703bd06d52a8.slice/crio-9830520e2a717d793a802525fb378baf08be36c6e9ea859e37434aa7637af19a WatchSource:0}: Error finding container 9830520e2a717d793a802525fb378baf08be36c6e9ea859e37434aa7637af19a: Status 404 returned error can't find the container with id 9830520e2a717d793a802525fb378baf08be36c6e9ea859e37434aa7637af19a Mar 13 01:18:13.565529 master-0 kubenswrapper[19803]: W0313 01:18:13.565311 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd948e0c5_a593_4fe0_bc58_8f157cd5ae1b.slice/crio-8f3284b2d2f9dec0d1a60ca5e09f2a53561045bfce68a1bd09be2f47615a155a WatchSource:0}: Error finding container 8f3284b2d2f9dec0d1a60ca5e09f2a53561045bfce68a1bd09be2f47615a155a: Status 404 returned error can't find the container with id 8f3284b2d2f9dec0d1a60ca5e09f2a53561045bfce68a1bd09be2f47615a155a Mar 13 01:18:13.808403 master-0 kubenswrapper[19803]: I0313 01:18:13.808349 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 01:18:13.810487 master-0 kubenswrapper[19803]: I0313 01:18:13.810452 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.855636 master-0 kubenswrapper[19803]: I0313 01:18:13.848734 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config" Mar 13 01:18:13.855636 master-0 kubenswrapper[19803]: I0313 01:18:13.848866 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-wq6hg" Mar 13 01:18:13.855636 master-0 kubenswrapper[19803]: I0313 01:18:13.849027 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls" Mar 13 01:18:13.855636 master-0 kubenswrapper[19803]: I0313 01:18:13.849118 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric" Mar 13 01:18:13.855636 master-0 kubenswrapper[19803]: I0313 01:18:13.849301 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web" Mar 13 01:18:13.855636 master-0 kubenswrapper[19803]: I0313 01:18:13.849453 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0" Mar 13 01:18:13.855636 master-0 kubenswrapper[19803]: I0313 01:18:13.849778 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 13 01:18:13.855636 master-0 kubenswrapper[19803]: I0313 01:18:13.850520 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy" Mar 13 01:18:13.855636 master-0 kubenswrapper[19803]: I0313 01:18:13.850668 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated" Mar 13 01:18:13.855636 master-0 kubenswrapper[19803]: I0313 01:18:13.852729 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 01:18:13.917460 master-0 kubenswrapper[19803]: I0313 01:18:13.913558 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r6jcs" event={"ID":"0a557547-de25-4165-a4f5-370b54cd7f70","Type":"ContainerStarted","Data":"29a498e27d7c5f0fdb77a99b5d9b25966239d49083482cad3681382dc75208d8"} Mar 13 01:18:13.917460 master-0 kubenswrapper[19803]: I0313 01:18:13.915343 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" event={"ID":"1ef69514-736d-44ba-a5e9-703bd06d52a8","Type":"ContainerStarted","Data":"9830520e2a717d793a802525fb378baf08be36c6e9ea859e37434aa7637af19a"} Mar 13 01:18:13.917460 master-0 kubenswrapper[19803]: I0313 01:18:13.916166 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-85xmz" event={"ID":"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b","Type":"ContainerStarted","Data":"8f3284b2d2f9dec0d1a60ca5e09f2a53561045bfce68a1bd09be2f47615a155a"} Mar 13 01:18:13.923608 master-0 kubenswrapper[19803]: I0313 01:18:13.919650 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" event={"ID":"5e147d06-d872-4691-95f8-b9d8b7584780","Type":"ContainerStarted","Data":"0078f18da5e4a70a9062bdfa78a339661c5f81ddc0d637df46d3f74d74d435ac"} Mar 13 01:18:13.923608 master-0 kubenswrapper[19803]: I0313 01:18:13.919744 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" event={"ID":"5e147d06-d872-4691-95f8-b9d8b7584780","Type":"ContainerStarted","Data":"b7a74a5b38a904b7a946938b6493ba3b3d2a73a5b5819b6f52ef2a6023388e41"} Mar 13 01:18:13.923608 master-0 kubenswrapper[19803]: I0313 01:18:13.919757 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" 
event={"ID":"5e147d06-d872-4691-95f8-b9d8b7584780","Type":"ContainerStarted","Data":"be8933e83162d580ad05aa7efd8094c9ad4d30f3b057c8d77c47edbf49c12d2e"} Mar 13 01:18:13.953864 master-0 kubenswrapper[19803]: I0313 01:18:13.953683 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-web-config\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.953864 master-0 kubenswrapper[19803]: I0313 01:18:13.953763 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.953864 master-0 kubenswrapper[19803]: I0313 01:18:13.953797 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.953864 master-0 kubenswrapper[19803]: I0313 01:18:13.953821 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.954319 master-0 kubenswrapper[19803]: I0313 01:18:13.953890 19803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.954319 master-0 kubenswrapper[19803]: I0313 01:18:13.953924 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zmgm\" (UniqueName: \"kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-kube-api-access-2zmgm\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.954319 master-0 kubenswrapper[19803]: I0313 01:18:13.953973 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-config-volume\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.954319 master-0 kubenswrapper[19803]: I0313 01:18:13.953998 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.954319 master-0 kubenswrapper[19803]: I0313 01:18:13.954027 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: 
\"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.954319 master-0 kubenswrapper[19803]: I0313 01:18:13.954068 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.954319 master-0 kubenswrapper[19803]: I0313 01:18:13.954096 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:13.954319 master-0 kubenswrapper[19803]: I0313 01:18:13.954120 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-config-out\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055251 master-0 kubenswrapper[19803]: I0313 01:18:14.055195 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2zmgm\" (UniqueName: \"kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-kube-api-access-2zmgm\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055251 master-0 kubenswrapper[19803]: I0313 01:18:14.055274 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-config-volume\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055630 master-0 kubenswrapper[19803]: I0313 01:18:14.055300 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055630 master-0 kubenswrapper[19803]: I0313 01:18:14.055327 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055630 master-0 kubenswrapper[19803]: I0313 01:18:14.055353 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055630 master-0 kubenswrapper[19803]: I0313 01:18:14.055381 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055630 master-0 kubenswrapper[19803]: I0313 01:18:14.055403 19803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-config-out\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055630 master-0 kubenswrapper[19803]: I0313 01:18:14.055435 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-web-config\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055630 master-0 kubenswrapper[19803]: I0313 01:18:14.055454 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055630 master-0 kubenswrapper[19803]: I0313 01:18:14.055477 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055630 master-0 kubenswrapper[19803]: I0313 01:18:14.055494 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.055630 master-0 kubenswrapper[19803]: I0313 
01:18:14.055556 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.058827 master-0 kubenswrapper[19803]: I0313 01:18:14.056852 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.058827 master-0 kubenswrapper[19803]: E0313 01:18:14.057625 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle podName:8b300a46-0e04-4109-a370-2589ce3efa0c nodeName:}" failed. No retries permitted until 2026-03-13 01:18:14.55761077 +0000 UTC m=+42.522758449 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:14.058827 master-0 kubenswrapper[19803]: I0313 01:18:14.058244 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.059313 master-0 kubenswrapper[19803]: I0313 01:18:14.059266 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-config-out\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.064909 master-0 kubenswrapper[19803]: I0313 01:18:14.059791 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.067228 master-0 kubenswrapper[19803]: I0313 01:18:14.066049 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-web-config\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.077659 master-0 kubenswrapper[19803]: I0313 01:18:14.067476 19803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.077659 master-0 kubenswrapper[19803]: I0313 01:18:14.067476 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-config-volume\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.077659 master-0 kubenswrapper[19803]: I0313 01:18:14.067489 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.077659 master-0 kubenswrapper[19803]: I0313 01:18:14.069597 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.077659 master-0 kubenswrapper[19803]: I0313 01:18:14.071482 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-tls-assets\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:14.083037 master-0 
kubenswrapper[19803]: I0313 01:18:14.080299 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2zmgm\" (UniqueName: \"kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-kube-api-access-2zmgm\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:15.265609 master-0 kubenswrapper[19803]: I0313 01:18:15.264697 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:15.265609 master-0 kubenswrapper[19803]: E0313 01:18:15.265142 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle podName:8b300a46-0e04-4109-a370-2589ce3efa0c nodeName:}" failed. No retries permitted until 2026-03-13 01:18:16.265119307 +0000 UTC m=+44.230266986 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:15.442608 master-0 kubenswrapper[19803]: I0313 01:18:15.440242 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/thanos-querier-5dc6c54498-5n2tv"] Mar 13 01:18:15.447876 master-0 kubenswrapper[19803]: I0313 01:18:15.447824 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.462770 master-0 kubenswrapper[19803]: I0313 01:18:15.459437 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 13 01:18:15.462770 master-0 kubenswrapper[19803]: I0313 01:18:15.459711 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy" Mar 13 01:18:15.462770 master-0 kubenswrapper[19803]: I0313 01:18:15.459811 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-6mpzn" Mar 13 01:18:15.462770 master-0 kubenswrapper[19803]: I0313 01:18:15.459863 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 13 01:18:15.462770 master-0 kubenswrapper[19803]: I0313 01:18:15.459933 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls" Mar 13 01:18:15.462770 master-0 kubenswrapper[19803]: I0313 01:18:15.460008 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-fpcr0ruobri08" Mar 13 01:18:15.462770 master-0 kubenswrapper[19803]: I0313 01:18:15.460109 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 13 01:18:15.469098 master-0 kubenswrapper[19803]: I0313 01:18:15.467177 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 
01:18:15.469098 master-0 kubenswrapper[19803]: I0313 01:18:15.467282 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh86q\" (UniqueName: \"kubernetes.io/projected/77b804a1-c0fb-42d6-bdea-b879db3eb94c-kube-api-access-zh86q\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.469098 master-0 kubenswrapper[19803]: I0313 01:18:15.467316 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.469098 master-0 kubenswrapper[19803]: I0313 01:18:15.467422 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-grpc-tls\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.469098 master-0 kubenswrapper[19803]: I0313 01:18:15.467487 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-tls\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.469098 master-0 kubenswrapper[19803]: I0313 01:18:15.467530 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/77b804a1-c0fb-42d6-bdea-b879db3eb94c-metrics-client-ca\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.469098 master-0 kubenswrapper[19803]: I0313 01:18:15.467603 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.469098 master-0 kubenswrapper[19803]: I0313 01:18:15.467709 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.471638 master-0 kubenswrapper[19803]: I0313 01:18:15.471573 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5dc6c54498-5n2tv"] Mar 13 01:18:15.569325 master-0 kubenswrapper[19803]: I0313 01:18:15.569181 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-tls\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.569325 master-0 kubenswrapper[19803]: I0313 01:18:15.569253 19803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/77b804a1-c0fb-42d6-bdea-b879db3eb94c-metrics-client-ca\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.570284 master-0 kubenswrapper[19803]: I0313 01:18:15.570214 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.570641 master-0 kubenswrapper[19803]: I0313 01:18:15.570590 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.570719 master-0 kubenswrapper[19803]: I0313 01:18:15.570695 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.570791 master-0 kubenswrapper[19803]: I0313 01:18:15.570773 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zh86q\" (UniqueName: 
\"kubernetes.io/projected/77b804a1-c0fb-42d6-bdea-b879db3eb94c-kube-api-access-zh86q\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.570849 master-0 kubenswrapper[19803]: I0313 01:18:15.570825 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.570990 master-0 kubenswrapper[19803]: I0313 01:18:15.570919 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-grpc-tls\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.571238 master-0 kubenswrapper[19803]: I0313 01:18:15.571187 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/77b804a1-c0fb-42d6-bdea-b879db3eb94c-metrics-client-ca\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.574306 master-0 kubenswrapper[19803]: I0313 01:18:15.574266 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " 
pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.577063 master-0 kubenswrapper[19803]: I0313 01:18:15.577026 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-grpc-tls\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.578913 master-0 kubenswrapper[19803]: I0313 01:18:15.578842 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy-web\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.580388 master-0 kubenswrapper[19803]: I0313 01:18:15.580360 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-kube-rbac-proxy-metrics\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy-metrics\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.590498 master-0 kubenswrapper[19803]: I0313 01:18:15.586088 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-thanos-querier-tls\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-tls\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.590498 master-0 kubenswrapper[19803]: I0313 01:18:15.586173 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"secret-thanos-querier-kube-rbac-proxy-rules\" (UniqueName: \"kubernetes.io/secret/77b804a1-c0fb-42d6-bdea-b879db3eb94c-secret-thanos-querier-kube-rbac-proxy-rules\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.594620 master-0 kubenswrapper[19803]: I0313 01:18:15.594566 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh86q\" (UniqueName: \"kubernetes.io/projected/77b804a1-c0fb-42d6-bdea-b879db3eb94c-kube-api-access-zh86q\") pod \"thanos-querier-5dc6c54498-5n2tv\" (UID: \"77b804a1-c0fb-42d6-bdea-b879db3eb94c\") " pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:15.795015 master-0 kubenswrapper[19803]: I0313 01:18:15.794537 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:16.297062 master-0 kubenswrapper[19803]: I0313 01:18:16.296997 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:16.297585 master-0 kubenswrapper[19803]: E0313 01:18:16.297285 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle podName:8b300a46-0e04-4109-a370-2589ce3efa0c nodeName:}" failed. No retries permitted until 2026-03-13 01:18:18.297249919 +0000 UTC m=+46.262397598 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:17.728627 master-0 kubenswrapper[19803]: I0313 01:18:17.728561 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/thanos-querier-5dc6c54498-5n2tv"] Mar 13 01:18:17.776529 master-0 kubenswrapper[19803]: W0313 01:18:17.776455 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod77b804a1_c0fb_42d6_bdea_b879db3eb94c.slice/crio-bd0d42c72b176ac502b028170d161647324831dcbf6aa55f022ab2b247d3052f WatchSource:0}: Error finding container bd0d42c72b176ac502b028170d161647324831dcbf6aa55f022ab2b247d3052f: Status 404 returned error can't find the container with id bd0d42c72b176ac502b028170d161647324831dcbf6aa55f022ab2b247d3052f Mar 13 01:18:18.345822 master-0 kubenswrapper[19803]: I0313 01:18:18.345750 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:18.346100 master-0 kubenswrapper[19803]: E0313 01:18:18.346044 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle podName:8b300a46-0e04-4109-a370-2589ce3efa0c nodeName:}" failed. No retries permitted until 2026-03-13 01:18:22.346014978 +0000 UTC m=+50.311162657 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:18.353697 master-0 kubenswrapper[19803]: I0313 01:18:18.353616 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" event={"ID":"1ef69514-736d-44ba-a5e9-703bd06d52a8","Type":"ContainerStarted","Data":"8d5d01a170373ac8a2e3038a8204e315e44a13642ab117a6eade79c75b426cb0"} Mar 13 01:18:18.353697 master-0 kubenswrapper[19803]: I0313 01:18:18.353678 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" event={"ID":"1ef69514-736d-44ba-a5e9-703bd06d52a8","Type":"ContainerStarted","Data":"a7cfdea57e2ccdae933a2414a2e9bbdda1c113d1145051844794842e3e3be637"} Mar 13 01:18:18.353697 master-0 kubenswrapper[19803]: I0313 01:18:18.353691 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" event={"ID":"1ef69514-736d-44ba-a5e9-703bd06d52a8","Type":"ContainerStarted","Data":"5b7d22834433b941f1a8c26b4c9992a7b3982b490bfc23fb2ea96a14e80610af"} Mar 13 01:18:18.355736 master-0 kubenswrapper[19803]: I0313 01:18:18.355676 19803 generic.go:334] "Generic (PLEG): container finished" podID="d948e0c5-a593-4fe0-bc58-8f157cd5ae1b" containerID="8e73ed106a2e2bbad5910ec232e718067f66f1198d8abcec1d1ad5cc5b1f3ca0" exitCode=0 Mar 13 01:18:18.355869 master-0 kubenswrapper[19803]: I0313 01:18:18.355810 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-85xmz" event={"ID":"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b","Type":"ContainerDied","Data":"8e73ed106a2e2bbad5910ec232e718067f66f1198d8abcec1d1ad5cc5b1f3ca0"} Mar 13 01:18:18.360726 master-0 
kubenswrapper[19803]: I0313 01:18:18.360658 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" event={"ID":"5e147d06-d872-4691-95f8-b9d8b7584780","Type":"ContainerStarted","Data":"30dc8e7402566e502908211ab2508ed405cb14ef980531ecfb394a9fd9725fce"} Mar 13 01:18:18.365278 master-0 kubenswrapper[19803]: I0313 01:18:18.365230 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" event={"ID":"77b804a1-c0fb-42d6-bdea-b879db3eb94c","Type":"ContainerStarted","Data":"bd0d42c72b176ac502b028170d161647324831dcbf6aa55f022ab2b247d3052f"} Mar 13 01:18:18.367181 master-0 kubenswrapper[19803]: I0313 01:18:18.367134 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-r6jcs" event={"ID":"0a557547-de25-4165-a4f5-370b54cd7f70","Type":"ContainerStarted","Data":"1b12b9a5ed1a115ebb17283f6a7cb075d859b01df125cde0eafdcd520776d85f"} Mar 13 01:18:18.371055 master-0 kubenswrapper[19803]: I0313 01:18:18.371003 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8"] Mar 13 01:18:18.372031 master-0 kubenswrapper[19803]: I0313 01:18:18.372000 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8" Mar 13 01:18:18.374555 master-0 kubenswrapper[19803]: I0313 01:18:18.374499 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-n8jpw" Mar 13 01:18:18.374616 master-0 kubenswrapper[19803]: I0313 01:18:18.374593 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert" Mar 13 01:18:18.388733 master-0 kubenswrapper[19803]: I0313 01:18:18.388675 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8"] Mar 13 01:18:18.389940 master-0 kubenswrapper[19803]: I0313 01:18:18.389870 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/kube-state-metrics-68b88f8cb5-6w4pf" podStartSLOduration=2.725709932 podStartE2EDuration="6.389848327s" podCreationTimestamp="2026-03-13 01:18:12 +0000 UTC" firstStartedPulling="2026-03-13 01:18:13.550898385 +0000 UTC m=+41.516046054" lastFinishedPulling="2026-03-13 01:18:17.21503676 +0000 UTC m=+45.180184449" observedRunningTime="2026-03-13 01:18:18.385103412 +0000 UTC m=+46.350251091" watchObservedRunningTime="2026-03-13 01:18:18.389848327 +0000 UTC m=+46.354996006" Mar 13 01:18:18.455600 master-0 kubenswrapper[19803]: I0313 01:18:18.452810 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/69519a11-aa5e-40e5-a655-992d32ef8150-monitoring-plugin-cert\") pod \"monitoring-plugin-6d885bb797-8nsd8\" (UID: \"69519a11-aa5e-40e5-a655-992d32ef8150\") " pod="openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8" Mar 13 01:18:18.488559 master-0 kubenswrapper[19803]: I0313 01:18:18.486120 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/metrics-server-8d4f75c74-k5jnm"] Mar 13 01:18:18.488559 master-0 
kubenswrapper[19803]: I0313 01:18:18.487369 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.497440 master-0 kubenswrapper[19803]: I0313 01:18:18.496999 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-n5nfx" Mar 13 01:18:18.497440 master-0 kubenswrapper[19803]: I0313 01:18:18.497287 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs" Mar 13 01:18:18.497681 master-0 kubenswrapper[19803]: I0313 01:18:18.497557 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls" Mar 13 01:18:18.503390 master-0 kubenswrapper[19803]: I0313 01:18:18.503119 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle" Mar 13 01:18:18.506850 master-0 kubenswrapper[19803]: I0313 01:18:18.503487 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles" Mar 13 01:18:18.506850 master-0 kubenswrapper[19803]: I0313 01:18:18.503849 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5hmj8ip2t2ob4" Mar 13 01:18:18.506850 master-0 kubenswrapper[19803]: I0313 01:18:18.504603 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-r6jcs" podStartSLOduration=2.482668943 podStartE2EDuration="6.504567487s" podCreationTimestamp="2026-03-13 01:18:12 +0000 UTC" firstStartedPulling="2026-03-13 01:18:13.192146482 +0000 UTC m=+41.157294161" lastFinishedPulling="2026-03-13 01:18:17.214045026 +0000 UTC m=+45.179192705" observedRunningTime="2026-03-13 01:18:18.498283355 +0000 UTC m=+46.463431034" watchObservedRunningTime="2026-03-13 01:18:18.504567487 +0000 UTC m=+46.469715176" Mar 13 
01:18:18.525247 master-0 kubenswrapper[19803]: I0313 01:18:18.524437 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-8d4f75c74-k5jnm"] Mar 13 01:18:18.545377 master-0 kubenswrapper[19803]: I0313 01:18:18.545205 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/openshift-state-metrics-74cc79fd76-fpgj5" podStartSLOduration=3.073722636 podStartE2EDuration="6.545169947s" podCreationTimestamp="2026-03-13 01:18:12 +0000 UTC" firstStartedPulling="2026-03-13 01:18:13.741330294 +0000 UTC m=+41.706477963" lastFinishedPulling="2026-03-13 01:18:17.212777595 +0000 UTC m=+45.177925274" observedRunningTime="2026-03-13 01:18:18.534704044 +0000 UTC m=+46.499851723" watchObservedRunningTime="2026-03-13 01:18:18.545169947 +0000 UTC m=+46.510317626" Mar 13 01:18:18.562037 master-0 kubenswrapper[19803]: I0313 01:18:18.561883 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5f8427fc-c594-4f19-9ef4-af196da1166e-audit-log\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.562037 master-0 kubenswrapper[19803]: I0313 01:18:18.561959 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/69519a11-aa5e-40e5-a655-992d32ef8150-monitoring-plugin-cert\") pod \"monitoring-plugin-6d885bb797-8nsd8\" (UID: \"69519a11-aa5e-40e5-a655-992d32ef8150\") " pod="openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8" Mar 13 01:18:18.562037 master-0 kubenswrapper[19803]: I0313 01:18:18.562003 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/5f8427fc-c594-4f19-9ef4-af196da1166e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.562232 master-0 kubenswrapper[19803]: I0313 01:18:18.562043 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5f8427fc-c594-4f19-9ef4-af196da1166e-secret-metrics-server-tls\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.562232 master-0 kubenswrapper[19803]: I0313 01:18:18.562127 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5f8427fc-c594-4f19-9ef4-af196da1166e-metrics-server-audit-profiles\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.562232 master-0 kubenswrapper[19803]: I0313 01:18:18.562187 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hztrs\" (UniqueName: \"kubernetes.io/projected/5f8427fc-c594-4f19-9ef4-af196da1166e-kube-api-access-hztrs\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.562232 master-0 kubenswrapper[19803]: I0313 01:18:18.562224 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5f8427fc-c594-4f19-9ef4-af196da1166e-secret-metrics-client-certs\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: 
\"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.562346 master-0 kubenswrapper[19803]: I0313 01:18:18.562272 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8427fc-c594-4f19-9ef4-af196da1166e-client-ca-bundle\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.578888 master-0 kubenswrapper[19803]: I0313 01:18:18.578844 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"monitoring-plugin-cert\" (UniqueName: \"kubernetes.io/secret/69519a11-aa5e-40e5-a655-992d32ef8150-monitoring-plugin-cert\") pod \"monitoring-plugin-6d885bb797-8nsd8\" (UID: \"69519a11-aa5e-40e5-a655-992d32ef8150\") " pod="openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8" Mar 13 01:18:18.664116 master-0 kubenswrapper[19803]: I0313 01:18:18.664034 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5f8427fc-c594-4f19-9ef4-af196da1166e-audit-log\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.664196 master-0 kubenswrapper[19803]: I0313 01:18:18.664150 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f8427fc-c594-4f19-9ef4-af196da1166e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.664639 master-0 kubenswrapper[19803]: I0313 01:18:18.664585 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5f8427fc-c594-4f19-9ef4-af196da1166e-secret-metrics-server-tls\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.664758 master-0 kubenswrapper[19803]: I0313 01:18:18.664735 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5f8427fc-c594-4f19-9ef4-af196da1166e-metrics-server-audit-profiles\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.665107 master-0 kubenswrapper[19803]: I0313 01:18:18.665063 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-log\" (UniqueName: \"kubernetes.io/empty-dir/5f8427fc-c594-4f19-9ef4-af196da1166e-audit-log\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.665377 master-0 kubenswrapper[19803]: I0313 01:18:18.665336 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hztrs\" (UniqueName: \"kubernetes.io/projected/5f8427fc-c594-4f19-9ef4-af196da1166e-kube-api-access-hztrs\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.665443 master-0 kubenswrapper[19803]: I0313 01:18:18.665424 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/5f8427fc-c594-4f19-9ef4-af196da1166e-secret-metrics-client-certs\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" 
Mar 13 01:18:18.665571 master-0 kubenswrapper[19803]: I0313 01:18:18.665544 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8427fc-c594-4f19-9ef4-af196da1166e-client-ca-bundle\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.667686 master-0 kubenswrapper[19803]: I0313 01:18:18.667648 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f8427fc-c594-4f19-9ef4-af196da1166e-configmap-kubelet-serving-ca-bundle\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.668412 master-0 kubenswrapper[19803]: I0313 01:18:18.668373 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-server-audit-profiles\" (UniqueName: \"kubernetes.io/configmap/5f8427fc-c594-4f19-9ef4-af196da1166e-metrics-server-audit-profiles\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.670005 master-0 kubenswrapper[19803]: I0313 01:18:18.669959 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-server-tls\" (UniqueName: \"kubernetes.io/secret/5f8427fc-c594-4f19-9ef4-af196da1166e-secret-metrics-server-tls\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.675664 master-0 kubenswrapper[19803]: I0313 01:18:18.675398 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/5f8427fc-c594-4f19-9ef4-af196da1166e-secret-metrics-client-certs\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.676867 master-0 kubenswrapper[19803]: I0313 01:18:18.676832 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5f8427fc-c594-4f19-9ef4-af196da1166e-client-ca-bundle\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.682803 master-0 kubenswrapper[19803]: I0313 01:18:18.682761 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hztrs\" (UniqueName: \"kubernetes.io/projected/5f8427fc-c594-4f19-9ef4-af196da1166e-kube-api-access-hztrs\") pod \"metrics-server-8d4f75c74-k5jnm\" (UID: \"5f8427fc-c594-4f19-9ef4-af196da1166e\") " pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:18.688760 master-0 kubenswrapper[19803]: I0313 01:18:18.688715 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8" Mar 13 01:18:18.841689 master-0 kubenswrapper[19803]: I0313 01:18:18.841628 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:19.092459 master-0 kubenswrapper[19803]: I0313 01:18:19.092167 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8"] Mar 13 01:18:19.103729 master-0 kubenswrapper[19803]: W0313 01:18:19.103673 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod69519a11_aa5e_40e5_a655_992d32ef8150.slice/crio-c58b1d44b8197985978d131461f6be2305095ef4c7b3e29df156acb82ba4fa1b WatchSource:0}: Error finding container c58b1d44b8197985978d131461f6be2305095ef4c7b3e29df156acb82ba4fa1b: Status 404 returned error can't find the container with id c58b1d44b8197985978d131461f6be2305095ef4c7b3e29df156acb82ba4fa1b Mar 13 01:18:19.263591 master-0 kubenswrapper[19803]: I0313 01:18:19.263500 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/metrics-server-8d4f75c74-k5jnm"] Mar 13 01:18:19.381019 master-0 kubenswrapper[19803]: I0313 01:18:19.380933 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8" event={"ID":"69519a11-aa5e-40e5-a655-992d32ef8150","Type":"ContainerStarted","Data":"c58b1d44b8197985978d131461f6be2305095ef4c7b3e29df156acb82ba4fa1b"} Mar 13 01:18:19.384689 master-0 kubenswrapper[19803]: I0313 01:18:19.384615 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-85xmz" event={"ID":"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b","Type":"ContainerStarted","Data":"88eb2f87428e8372ff64467a55aaf762ec8ab5d28e77b46410caf0e8b414898d"} Mar 13 01:18:19.384689 master-0 kubenswrapper[19803]: I0313 01:18:19.384662 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/node-exporter-85xmz" 
event={"ID":"d948e0c5-a593-4fe0-bc58-8f157cd5ae1b","Type":"ContainerStarted","Data":"8a2fb1d8180973e91759a671fd1a93f8451a05bd9404af50f0a90e61566ab8d7"} Mar 13 01:18:19.386042 master-0 kubenswrapper[19803]: I0313 01:18:19.385957 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" event={"ID":"5f8427fc-c594-4f19-9ef4-af196da1166e","Type":"ContainerStarted","Data":"9dff49378390c6416c770e891e87f8f8ee201130e78fca139ff41c8789c61aa4"} Mar 13 01:18:20.212104 master-0 kubenswrapper[19803]: I0313 01:18:20.212021 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/node-exporter-85xmz" podStartSLOduration=4.576675635 podStartE2EDuration="8.212002994s" podCreationTimestamp="2026-03-13 01:18:12 +0000 UTC" firstStartedPulling="2026-03-13 01:18:13.575666973 +0000 UTC m=+41.540814652" lastFinishedPulling="2026-03-13 01:18:17.210994312 +0000 UTC m=+45.176142011" observedRunningTime="2026-03-13 01:18:19.424216282 +0000 UTC m=+47.389363961" watchObservedRunningTime="2026-03-13 01:18:20.212002994 +0000 UTC m=+48.177150673" Mar 13 01:18:20.212870 master-0 kubenswrapper[19803]: I0313 01:18:20.212564 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 01:18:20.222759 master-0 kubenswrapper[19803]: I0313 01:18:20.222665 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.227533 master-0 kubenswrapper[19803]: I0313 01:18:20.227431 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 13 01:18:20.227865 master-0 kubenswrapper[19803]: I0313 01:18:20.227800 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls" Mar 13 01:18:20.227922 master-0 kubenswrapper[19803]: I0313 01:18:20.227886 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-ftucsgvpi0546" Mar 13 01:18:20.228542 master-0 kubenswrapper[19803]: I0313 01:18:20.228010 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 13 01:18:20.228542 master-0 kubenswrapper[19803]: I0313 01:18:20.228111 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file" Mar 13 01:18:20.228542 master-0 kubenswrapper[19803]: I0313 01:18:20.228203 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web" Mar 13 01:18:20.228542 master-0 kubenswrapper[19803]: I0313 01:18:20.228248 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls" Mar 13 01:18:20.228542 master-0 kubenswrapper[19803]: I0313 01:18:20.228363 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config" Mar 13 01:18:20.228542 master-0 kubenswrapper[19803]: I0313 01:18:20.228481 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy" Mar 13 01:18:20.228740 master-0 kubenswrapper[19803]: I0313 01:18:20.228638 19803 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-monitoring"/"serving-certs-ca-bundle" Mar 13 01:18:20.230283 master-0 kubenswrapper[19803]: I0313 01:18:20.228999 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0" Mar 13 01:18:20.230283 master-0 kubenswrapper[19803]: I0313 01:18:20.229148 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-f4w47" Mar 13 01:18:20.237976 master-0 kubenswrapper[19803]: I0313 01:18:20.237932 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 13 01:18:20.276492 master-0 kubenswrapper[19803]: I0313 01:18:20.276156 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 01:18:20.321080 master-0 kubenswrapper[19803]: I0313 01:18:20.320988 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321390 master-0 kubenswrapper[19803]: I0313 01:18:20.321104 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321390 master-0 kubenswrapper[19803]: I0313 01:18:20.321168 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: 
\"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321390 master-0 kubenswrapper[19803]: I0313 01:18:20.321201 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321390 master-0 kubenswrapper[19803]: I0313 01:18:20.321226 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321390 master-0 kubenswrapper[19803]: I0313 01:18:20.321260 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321390 master-0 kubenswrapper[19803]: I0313 01:18:20.321300 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-web-config\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321390 master-0 kubenswrapper[19803]: I0313 
01:18:20.321332 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321390 master-0 kubenswrapper[19803]: I0313 01:18:20.321353 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321390 master-0 kubenswrapper[19803]: I0313 01:18:20.321399 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321668 master-0 kubenswrapper[19803]: I0313 01:18:20.321428 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config-out\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321668 master-0 kubenswrapper[19803]: I0313 01:18:20.321475 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-kube-rbac-proxy-web\") pod 
\"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321668 master-0 kubenswrapper[19803]: I0313 01:18:20.321503 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321668 master-0 kubenswrapper[19803]: I0313 01:18:20.321554 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321668 master-0 kubenswrapper[19803]: I0313 01:18:20.321579 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gws49\" (UniqueName: \"kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-kube-api-access-gws49\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321668 master-0 kubenswrapper[19803]: I0313 01:18:20.321624 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321668 master-0 kubenswrapper[19803]: I0313 01:18:20.321644 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.321668 master-0 kubenswrapper[19803]: I0313 01:18:20.321663 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.425566 master-0 kubenswrapper[19803]: I0313 01:18:20.425468 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.425566 master-0 kubenswrapper[19803]: I0313 01:18:20.425555 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gws49\" (UniqueName: \"kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-kube-api-access-gws49\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425610 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425639 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425673 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425718 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425755 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425806 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425830 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425852 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425876 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425905 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-web-config\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.425936 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 
kubenswrapper[19803]: I0313 01:18:20.425960 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.426005 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.426037 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config-out\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.426054 master-0 kubenswrapper[19803]: I0313 01:18:20.426071 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.427905 master-0 kubenswrapper[19803]: I0313 01:18:20.426105 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.436651 master-0 kubenswrapper[19803]: I0313 01:18:20.429861 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.436651 master-0 kubenswrapper[19803]: I0313 01:18:20.430087 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.436651 master-0 kubenswrapper[19803]: I0313 01:18:20.430727 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.436651 master-0 kubenswrapper[19803]: I0313 01:18:20.432554 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.437192 master-0 kubenswrapper[19803]: I0313 01:18:20.436769 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config-out\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.441166 master-0 kubenswrapper[19803]: I0313 01:18:20.441113 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.441518 master-0 kubenswrapper[19803]: I0313 01:18:20.441476 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.443779 master-0 kubenswrapper[19803]: I0313 01:18:20.443727 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.444044 master-0 kubenswrapper[19803]: E0313 01:18:20.444007 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle podName:80dda8c5-33c6-46ba-b4fa-8e4877de9187 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:20.943986216 +0000 UTC m=+48.909133895 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:20.444833 master-0 kubenswrapper[19803]: I0313 01:18:20.444686 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.450045 master-0 kubenswrapper[19803]: I0313 01:18:20.449983 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-web-config\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.450606 master-0 kubenswrapper[19803]: I0313 01:18:20.450275 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.450606 master-0 kubenswrapper[19803]: I0313 01:18:20.450391 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.451632 master-0 kubenswrapper[19803]: I0313 01:18:20.451592 19803 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.451886 master-0 kubenswrapper[19803]: I0313 01:18:20.451856 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.452034 master-0 kubenswrapper[19803]: I0313 01:18:20.452000 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.452393 master-0 kubenswrapper[19803]: I0313 01:18:20.452354 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:20.521500 master-0 kubenswrapper[19803]: I0313 01:18:20.521445 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gws49\" (UniqueName: \"kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-kube-api-access-gws49\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:21.036857 master-0 
kubenswrapper[19803]: I0313 01:18:21.036800 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:21.037078 master-0 kubenswrapper[19803]: E0313 01:18:21.037040 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle podName:80dda8c5-33c6-46ba-b4fa-8e4877de9187 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:22.037011775 +0000 UTC m=+50.002159524 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:22.064424 master-0 kubenswrapper[19803]: I0313 01:18:22.064234 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:22.064424 master-0 kubenswrapper[19803]: E0313 01:18:22.064408 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle podName:80dda8c5-33c6-46ba-b4fa-8e4877de9187 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:24.064382842 +0000 UTC m=+52.029530521 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:22.421464 master-0 kubenswrapper[19803]: I0313 01:18:22.421368 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:22.421718 master-0 kubenswrapper[19803]: E0313 01:18:22.421684 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle podName:8b300a46-0e04-4109-a370-2589ce3efa0c nodeName:}" failed. No retries permitted until 2026-03-13 01:18:30.421640088 +0000 UTC m=+58.386787777 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:23.336009 master-0 kubenswrapper[19803]: I0313 01:18:23.335948 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:18:23.337256 master-0 kubenswrapper[19803]: E0313 01:18:23.336207 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca podName:a1d1a41c-8533-4854-abea-ed42c4d7c71f nodeName:}" failed. No retries permitted until 2026-03-13 01:18:55.336173891 +0000 UTC m=+83.301321610 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca") pod "console-operator-6c7fb6b958-4cbn4" (UID: "a1d1a41c-8533-4854-abea-ed42c4d7c71f") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:23.421660 master-0 kubenswrapper[19803]: I0313 01:18:23.421596 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8" event={"ID":"69519a11-aa5e-40e5-a655-992d32ef8150","Type":"ContainerStarted","Data":"1e71d621a78d465a8fef806a2612a7eb29c5bf31e23f7fa295c322e06867433a"} Mar 13 01:18:23.421993 master-0 kubenswrapper[19803]: I0313 01:18:23.421961 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8" Mar 13 01:18:23.425961 master-0 kubenswrapper[19803]: I0313 01:18:23.425905 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" event={"ID":"5f8427fc-c594-4f19-9ef4-af196da1166e","Type":"ContainerStarted","Data":"4ba35b446f83c7e837fec5e01ceb3869b5aede25e4df46f185af58226a1ee6f9"} Mar 13 01:18:23.429600 master-0 kubenswrapper[19803]: I0313 01:18:23.429002 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" event={"ID":"77b804a1-c0fb-42d6-bdea-b879db3eb94c","Type":"ContainerStarted","Data":"ce51d1da0903a802b97d3fd003d877f27b9682f66259ca3b5e137685d87f6bd5"} Mar 13 01:18:23.429600 master-0 kubenswrapper[19803]: I0313 01:18:23.429051 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" event={"ID":"77b804a1-c0fb-42d6-bdea-b879db3eb94c","Type":"ContainerStarted","Data":"48a8fe0b465a831dcb32120fdc85b04ada7e103ff5bccc1ef0b5e2678b983f05"} Mar 13 01:18:23.429600 master-0 kubenswrapper[19803]: I0313 01:18:23.429066 19803 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" event={"ID":"77b804a1-c0fb-42d6-bdea-b879db3eb94c","Type":"ContainerStarted","Data":"4e77f3f9ddac1fb95a858bfb93a4cc668d1cb876c1dab3c1914f419764e41cf7"} Mar 13 01:18:23.442040 master-0 kubenswrapper[19803]: I0313 01:18:23.441941 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8" podStartSLOduration=2.233914933 podStartE2EDuration="5.441921925s" podCreationTimestamp="2026-03-13 01:18:18 +0000 UTC" firstStartedPulling="2026-03-13 01:18:19.107848993 +0000 UTC m=+47.072996692" lastFinishedPulling="2026-03-13 01:18:22.315856005 +0000 UTC m=+50.281003684" observedRunningTime="2026-03-13 01:18:23.43968473 +0000 UTC m=+51.404832429" watchObservedRunningTime="2026-03-13 01:18:23.441921925 +0000 UTC m=+51.407069604" Mar 13 01:18:23.442426 master-0 kubenswrapper[19803]: I0313 01:18:23.442375 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/monitoring-plugin-6d885bb797-8nsd8" Mar 13 01:18:23.465543 master-0 kubenswrapper[19803]: I0313 01:18:23.465428 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" podStartSLOduration=1.608275678 podStartE2EDuration="5.465408412s" podCreationTimestamp="2026-03-13 01:18:18 +0000 UTC" firstStartedPulling="2026-03-13 01:18:19.281683191 +0000 UTC m=+47.246830870" lastFinishedPulling="2026-03-13 01:18:23.138815915 +0000 UTC m=+51.103963604" observedRunningTime="2026-03-13 01:18:23.463443714 +0000 UTC m=+51.428591403" watchObservedRunningTime="2026-03-13 01:18:23.465408412 +0000 UTC m=+51.430556111" Mar 13 01:18:24.148477 master-0 kubenswrapper[19803]: I0313 01:18:24.148391 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:24.149130 master-0 kubenswrapper[19803]: E0313 01:18:24.148601 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle podName:80dda8c5-33c6-46ba-b4fa-8e4877de9187 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:28.148572637 +0000 UTC m=+56.113720316 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:25.449484 master-0 kubenswrapper[19803]: I0313 01:18:25.449354 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" event={"ID":"77b804a1-c0fb-42d6-bdea-b879db3eb94c","Type":"ContainerStarted","Data":"04a45873d2111d08d481222a9ca741b4df8c9a943cf2e54c6458953addb019df"} Mar 13 01:18:25.449484 master-0 kubenswrapper[19803]: I0313 01:18:25.449413 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" event={"ID":"77b804a1-c0fb-42d6-bdea-b879db3eb94c","Type":"ContainerStarted","Data":"ad1b3c0fb91f6532da88699611c4fe2bdc9d1d833770f2bf56f8e44904ef8dc3"} Mar 13 01:18:25.449484 master-0 kubenswrapper[19803]: I0313 01:18:25.449427 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" event={"ID":"77b804a1-c0fb-42d6-bdea-b879db3eb94c","Type":"ContainerStarted","Data":"7680b5d235efbf9c0b31d8bcd4d416338b526ba640e7debdb56854debe91ed87"} Mar 13 01:18:25.450303 master-0 
kubenswrapper[19803]: I0313 01:18:25.449505 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:25.483082 master-0 kubenswrapper[19803]: I0313 01:18:25.482975 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" podStartSLOduration=3.810028203 podStartE2EDuration="10.482955438s" podCreationTimestamp="2026-03-13 01:18:15 +0000 UTC" firstStartedPulling="2026-03-13 01:18:17.780428431 +0000 UTC m=+45.745576110" lastFinishedPulling="2026-03-13 01:18:24.453355666 +0000 UTC m=+52.418503345" observedRunningTime="2026-03-13 01:18:25.482145028 +0000 UTC m=+53.447292717" watchObservedRunningTime="2026-03-13 01:18:25.482955438 +0000 UTC m=+53.448103137" Mar 13 01:18:28.241505 master-0 kubenswrapper[19803]: I0313 01:18:28.238742 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:28.241505 master-0 kubenswrapper[19803]: E0313 01:18:28.239373 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle podName:80dda8c5-33c6-46ba-b4fa-8e4877de9187 nodeName:}" failed. No retries permitted until 2026-03-13 01:18:36.239337193 +0000 UTC m=+64.204484912 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:30.475840 master-0 kubenswrapper[19803]: I0313 01:18:30.475767 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:30.476481 master-0 kubenswrapper[19803]: E0313 01:18:30.476040 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle podName:8b300a46-0e04-4109-a370-2589ce3efa0c nodeName:}" failed. No retries permitted until 2026-03-13 01:18:46.47601118 +0000 UTC m=+74.441158869 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:30.804141 master-0 kubenswrapper[19803]: I0313 01:18:30.804072 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/thanos-querier-5dc6c54498-5n2tv" Mar 13 01:18:32.264907 master-0 kubenswrapper[19803]: I0313 01:18:32.260533 19803 scope.go:117] "RemoveContainer" containerID="b6ea782ca75304abc2ccc9ab19e6d9b4a2889fe649ebf475c9c95d91d8dba102" Mar 13 01:18:32.470241 master-0 kubenswrapper[19803]: I0313 01:18:32.303182 19803 scope.go:117] "RemoveContainer" containerID="8df1059c68299a3330235cc4d111397a59bfb0c4b40d95af664427109c129231" Mar 13 01:18:32.496262 master-0 kubenswrapper[19803]: I0313 01:18:32.496213 19803 scope.go:117] "RemoveContainer" containerID="6c9bd5245949231d7973259139b8774c20bbb32018502eb3bd133d4e8aa89584" Mar 13 01:18:32.525529 master-0 kubenswrapper[19803]: I0313 01:18:32.525472 19803 scope.go:117] "RemoveContainer" containerID="53dcbd61cdb4ba2de960bb2099fda9de5cc31628732654b744e0b56ff9b97460" Mar 13 01:18:36.245558 master-0 kubenswrapper[19803]: I0313 01:18:36.245397 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:36.246563 master-0 kubenswrapper[19803]: E0313 01:18:36.246004 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle podName:80dda8c5-33c6-46ba-b4fa-8e4877de9187 
nodeName:}" failed. No retries permitted until 2026-03-13 01:18:52.245968242 +0000 UTC m=+80.211115961 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:38.842814 master-0 kubenswrapper[19803]: I0313 01:18:38.842679 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:38.842814 master-0 kubenswrapper[19803]: I0313 01:18:38.842819 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:40.728391 master-0 kubenswrapper[19803]: I0313 01:18:40.728277 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:18:40.729012 master-0 kubenswrapper[19803]: I0313 01:18:40.728607 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:18:40.729012 master-0 kubenswrapper[19803]: E0313 01:18:40.728636 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:18:40.729012 master-0 kubenswrapper[19803]: I0313 
01:18:40.728665 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:18:40.729012 master-0 kubenswrapper[19803]: E0313 01:18:40.728686 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:18:40.729012 master-0 kubenswrapper[19803]: E0313 01:18:40.728876 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:19:44.728843425 +0000 UTC m=+132.693991134 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:18:40.729261 master-0 kubenswrapper[19803]: E0313 01:18:40.729036 19803 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:18:40.729261 master-0 kubenswrapper[19803]: E0313 01:18:40.729064 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:18:40.729261 master-0 kubenswrapper[19803]: E0313 01:18:40.729124 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access podName:7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90 nodeName:}" failed. No retries permitted until 2026-03-13 01:19:44.729110072 +0000 UTC m=+132.694257781 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access") pod "installer-4-master-0" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:18:40.729261 master-0 kubenswrapper[19803]: E0313 01:18:40.729206 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:18:40.729261 master-0 kubenswrapper[19803]: E0313 01:18:40.729222 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:18:40.729261 master-0 kubenswrapper[19803]: E0313 01:18:40.729258 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:19:44.729243875 +0000 UTC m=+132.694391584 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:18:46.540533 master-0 kubenswrapper[19803]: I0313 01:18:46.540446 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:18:46.541165 master-0 kubenswrapper[19803]: E0313 01:18:46.540670 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle podName:8b300a46-0e04-4109-a370-2589ce3efa0c nodeName:}" failed. No retries permitted until 2026-03-13 01:19:18.540649448 +0000 UTC m=+106.505797127 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:52.262058 master-0 kubenswrapper[19803]: I0313 01:18:52.261938 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:18:52.263306 master-0 kubenswrapper[19803]: E0313 01:18:52.262354 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle podName:80dda8c5-33c6-46ba-b4fa-8e4877de9187 nodeName:}" failed. No retries permitted until 2026-03-13 01:19:24.262306883 +0000 UTC m=+112.227454592 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:55.368268 master-0 kubenswrapper[19803]: I0313 01:18:55.368176 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:18:55.369182 master-0 kubenswrapper[19803]: E0313 01:18:55.368405 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca podName:a1d1a41c-8533-4854-abea-ed42c4d7c71f nodeName:}" failed. No retries permitted until 2026-03-13 01:19:59.368383642 +0000 UTC m=+147.333531321 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca") pod "console-operator-6c7fb6b958-4cbn4" (UID: "a1d1a41c-8533-4854-abea-ed42c4d7c71f") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:18:58.852314 master-0 kubenswrapper[19803]: I0313 01:18:58.852193 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:18:58.859694 master-0 kubenswrapper[19803]: I0313 01:18:58.859650 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/metrics-server-8d4f75c74-k5jnm" Mar 13 01:19:18.632989 master-0 kubenswrapper[19803]: I0313 01:19:18.632917 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:19:18.633993 master-0 kubenswrapper[19803]: E0313 01:19:18.633150 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle podName:8b300a46-0e04-4109-a370-2589ce3efa0c nodeName:}" failed. No retries permitted until 2026-03-13 01:20:22.633110003 +0000 UTC m=+170.598257682 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:19:24.332000 master-0 kubenswrapper[19803]: I0313 01:19:24.331854 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:19:24.333223 master-0 kubenswrapper[19803]: E0313 01:19:24.332195 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle podName:80dda8c5-33c6-46ba-b4fa-8e4877de9187 nodeName:}" failed. No retries permitted until 2026-03-13 01:20:28.332157473 +0000 UTC m=+176.297305192 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:19:24.517800 master-0 kubenswrapper[19803]: I0313 01:19:24.517691 19803 patch_prober.go:28] interesting pod/machine-config-daemon-fprhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 01:19:24.517800 master-0 kubenswrapper[19803]: I0313 01:19:24.517788 19803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" podUID="3418d0fb-d0ae-4634-a645-dc387a19147f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 01:19:44.802904 master-0 kubenswrapper[19803]: E0313 01:19:44.802795 19803 projected.go:288] Couldn't get configMap openshift-kube-scheduler/kube-root-ca.crt: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:19:44.802904 master-0 kubenswrapper[19803]: E0313 01:19:44.802892 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-scheduler/installer-4-master-0: object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:19:44.804232 master-0 kubenswrapper[19803]: E0313 01:19:44.802984 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access podName:7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90 nodeName:}" failed. No retries permitted until 2026-03-13 01:21:46.802955052 +0000 UTC m=+254.768102771 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access") pod "installer-4-master-0" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90") : object "openshift-kube-scheduler"/"kube-root-ca.crt" not registered Mar 13 01:19:44.804232 master-0 kubenswrapper[19803]: I0313 01:19:44.802585 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0" Mar 13 01:19:44.804232 master-0 kubenswrapper[19803]: I0313 01:19:44.803765 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:19:44.804232 master-0 kubenswrapper[19803]: E0313 01:19:44.803907 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:19:44.804232 master-0 kubenswrapper[19803]: E0313 01:19:44.803929 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:19:44.804232 master-0 kubenswrapper[19803]: E0313 01:19:44.803991 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. 
No retries permitted until 2026-03-13 01:21:46.803974107 +0000 UTC m=+254.769121826 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:19:44.804232 master-0 kubenswrapper[19803]: I0313 01:19:44.804219 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:19:44.804922 master-0 kubenswrapper[19803]: E0313 01:19:44.804394 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:19:44.804922 master-0 kubenswrapper[19803]: E0313 01:19:44.804414 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:19:44.804922 master-0 kubenswrapper[19803]: E0313 01:19:44.804467 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:21:46.804452358 +0000 UTC m=+254.769600077 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:19:54.486468 master-0 kubenswrapper[19803]: E0313 01:19:54.486331 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[trusted-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" podUID="a1d1a41c-8533-4854-abea-ed42c4d7c71f" Mar 13 01:19:54.517422 master-0 kubenswrapper[19803]: I0313 01:19:54.517358 19803 patch_prober.go:28] interesting pod/machine-config-daemon-fprhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 01:19:54.517790 master-0 kubenswrapper[19803]: I0313 01:19:54.517743 19803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" podUID="3418d0fb-d0ae-4634-a645-dc387a19147f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 01:19:55.211155 master-0 kubenswrapper[19803]: I0313 01:19:55.211061 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:19:59.392386 master-0 kubenswrapper[19803]: I0313 01:19:59.392255 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" Mar 13 01:19:59.393254 master-0 kubenswrapper[19803]: E0313 01:19:59.392652 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca podName:a1d1a41c-8533-4854-abea-ed42c4d7c71f nodeName:}" failed. No retries permitted until 2026-03-13 01:22:01.392608865 +0000 UTC m=+269.357756744 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "trusted-ca" (UniqueName: "kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca") pod "console-operator-6c7fb6b958-4cbn4" (UID: "a1d1a41c-8533-4854-abea-ed42c4d7c71f") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:20:16.907412 master-0 kubenswrapper[19803]: E0313 01:20:16.907293 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[alertmanager-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/alertmanager-main-0" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" Mar 13 01:20:17.411608 master-0 kubenswrapper[19803]: I0313 01:20:17.411488 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:20:22.675144 master-0 kubenswrapper[19803]: I0313 01:20:22.674833 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:20:22.675144 master-0 kubenswrapper[19803]: E0313 01:20:22.675146 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle podName:8b300a46-0e04-4109-a370-2589ce3efa0c nodeName:}" failed. No retries permitted until 2026-03-13 01:22:24.675107418 +0000 UTC m=+292.640255187 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "alertmanager-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle") pod "alertmanager-main-0" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:20:23.282433 master-0 kubenswrapper[19803]: E0313 01:20:23.282314 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" Mar 13 01:20:23.460151 master-0 kubenswrapper[19803]: I0313 01:20:23.459959 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:20:24.517171 master-0 kubenswrapper[19803]: I0313 01:20:24.517063 19803 patch_prober.go:28] interesting pod/machine-config-daemon-fprhw container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Mar 13 01:20:24.518473 master-0 kubenswrapper[19803]: I0313 01:20:24.517178 19803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" podUID="3418d0fb-d0ae-4634-a645-dc387a19147f" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Mar 13 01:20:24.518473 master-0 kubenswrapper[19803]: I0313 01:20:24.517265 19803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" Mar 13 01:20:24.518473 master-0 kubenswrapper[19803]: I0313 01:20:24.518238 19803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7bae26fbeb039bb89409ea2b07418b33a068c51b808317d7c8ef9c01bf69e60a"} pod="openshift-machine-config-operator/machine-config-daemon-fprhw" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Mar 13 01:20:24.518473 master-0 kubenswrapper[19803]: I0313 01:20:24.518447 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" podUID="3418d0fb-d0ae-4634-a645-dc387a19147f" containerName="machine-config-daemon" containerID="cri-o://7bae26fbeb039bb89409ea2b07418b33a068c51b808317d7c8ef9c01bf69e60a" gracePeriod=600 Mar 13 01:20:25.485049 master-0 kubenswrapper[19803]: I0313 01:20:25.484949 
19803 generic.go:334] "Generic (PLEG): container finished" podID="3418d0fb-d0ae-4634-a645-dc387a19147f" containerID="7bae26fbeb039bb89409ea2b07418b33a068c51b808317d7c8ef9c01bf69e60a" exitCode=0 Mar 13 01:20:25.485049 master-0 kubenswrapper[19803]: I0313 01:20:25.485039 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" event={"ID":"3418d0fb-d0ae-4634-a645-dc387a19147f","Type":"ContainerDied","Data":"7bae26fbeb039bb89409ea2b07418b33a068c51b808317d7c8ef9c01bf69e60a"} Mar 13 01:20:25.485295 master-0 kubenswrapper[19803]: I0313 01:20:25.485091 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-fprhw" event={"ID":"3418d0fb-d0ae-4634-a645-dc387a19147f","Type":"ContainerStarted","Data":"7ea71878c0f2851bf2e33fc707a0a4bb208a5c63cb0f959a215dd3d1a203393f"} Mar 13 01:20:28.393344 master-0 kubenswrapper[19803]: I0313 01:20:28.393168 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:20:28.394877 master-0 kubenswrapper[19803]: E0313 01:20:28.393623 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle podName:80dda8c5-33c6-46ba-b4fa-8e4877de9187 nodeName:}" failed. No retries permitted until 2026-03-13 01:22:30.393581202 +0000 UTC m=+298.358729101 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "prometheus-trusted-ca-bundle" (UniqueName: "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle") pod "prometheus-k8s-0" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187") : configmap references non-existent config key: ca-bundle.crt Mar 13 01:20:32.747771 master-0 kubenswrapper[19803]: I0313 01:20:32.747658 19803 scope.go:117] "RemoveContainer" containerID="9ffa27ab0dc3e98ab44b8a36575c0b8aebd551a30b7af7d3a867758695337923" Mar 13 01:21:36.494592 master-0 kubenswrapper[19803]: I0313 01:21:36.494452 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 13 01:21:36.498249 master-0 kubenswrapper[19803]: I0313 01:21:36.498194 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:36.503059 master-0 kubenswrapper[19803]: I0313 01:21:36.502972 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd"/"kube-root-ca.crt" Mar 13 01:21:36.503436 master-0 kubenswrapper[19803]: I0313 01:21:36.503405 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd"/"installer-sa-dockercfg-xknvk" Mar 13 01:21:36.514635 master-0 kubenswrapper[19803]: I0313 01:21:36.514542 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 13 01:21:36.598785 master-0 kubenswrapper[19803]: I0313 01:21:36.598673 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd3a989f-6c19-4f5d-b14f-369ed9941051-kube-api-access\") pod \"installer-2-master-0\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:36.598785 master-0 kubenswrapper[19803]: I0313 01:21:36.598746 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:36.599195 master-0 kubenswrapper[19803]: I0313 01:21:36.598940 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-var-lock\") pod \"installer-2-master-0\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:36.700097 master-0 kubenswrapper[19803]: I0313 01:21:36.699995 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-var-lock\") pod \"installer-2-master-0\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:36.700326 master-0 kubenswrapper[19803]: I0313 01:21:36.700151 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-var-lock\") pod \"installer-2-master-0\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:36.700326 master-0 kubenswrapper[19803]: I0313 01:21:36.700179 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd3a989f-6c19-4f5d-b14f-369ed9941051-kube-api-access\") pod \"installer-2-master-0\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:36.700326 master-0 kubenswrapper[19803]: I0313 01:21:36.700303 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:36.700448 master-0 kubenswrapper[19803]: I0313 01:21:36.700399 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:36.719090 master-0 kubenswrapper[19803]: I0313 01:21:36.719032 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd3a989f-6c19-4f5d-b14f-369ed9941051-kube-api-access\") pod \"installer-2-master-0\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:36.836063 master-0 kubenswrapper[19803]: I0313 01:21:36.835780 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 01:21:37.137645 master-0 kubenswrapper[19803]: I0313 01:21:37.137378 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd/installer-2-master-0"] Mar 13 01:21:37.152850 master-0 kubenswrapper[19803]: W0313 01:21:37.152061 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poddd3a989f_6c19_4f5d_b14f_369ed9941051.slice/crio-fed581821cbe4ecf53d53e5239f184430e9714c2ea0455427df415e068ef49e5 WatchSource:0}: Error finding container fed581821cbe4ecf53d53e5239f184430e9714c2ea0455427df415e068ef49e5: Status 404 returned error can't find the container with id fed581821cbe4ecf53d53e5239f184430e9714c2ea0455427df415e068ef49e5 Mar 13 01:21:38.150078 master-0 kubenswrapper[19803]: I0313 01:21:38.148845 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"dd3a989f-6c19-4f5d-b14f-369ed9941051","Type":"ContainerStarted","Data":"94e782c4fd48308e553bb97d16271a0c8d139701850895aec65301f10c7adeb8"} Mar 13 01:21:38.150078 master-0 kubenswrapper[19803]: I0313 01:21:38.148934 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"dd3a989f-6c19-4f5d-b14f-369ed9941051","Type":"ContainerStarted","Data":"fed581821cbe4ecf53d53e5239f184430e9714c2ea0455427df415e068ef49e5"} Mar 13 01:21:38.182723 master-0 kubenswrapper[19803]: I0313 01:21:38.176323 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/installer-2-master-0" podStartSLOduration=2.17629124 podStartE2EDuration="2.17629124s" podCreationTimestamp="2026-03-13 01:21:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:21:38.172985641 +0000 UTC m=+246.138133320" watchObservedRunningTime="2026-03-13 01:21:38.17629124 +0000 UTC m=+246.141438959" Mar 13 
01:21:44.808023 master-0 kubenswrapper[19803]: I0313 01:21:44.807753 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 13 01:21:44.810694 master-0 kubenswrapper[19803]: I0313 01:21:44.810073 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:44.815328 master-0 kubenswrapper[19803]: I0313 01:21:44.815175 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-v4fzd"
Mar 13 01:21:44.815827 master-0 kubenswrapper[19803]: I0313 01:21:44.815783 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt"
Mar 13 01:21:44.820823 master-0 kubenswrapper[19803]: I0313 01:21:44.820722 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 13 01:21:44.873971 master-0 kubenswrapper[19803]: I0313 01:21:44.873871 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6d93d3d-2899-4962-a25a-712e2fb9584b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:44.874165 master-0 kubenswrapper[19803]: I0313 01:21:44.873981 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-var-lock\") pod \"installer-5-master-0\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:44.874165 master-0 kubenswrapper[19803]: I0313 01:21:44.874064 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:44.975350 master-0 kubenswrapper[19803]: I0313 01:21:44.975280 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6d93d3d-2899-4962-a25a-712e2fb9584b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:44.975350 master-0 kubenswrapper[19803]: I0313 01:21:44.975344 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-var-lock\") pod \"installer-5-master-0\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:44.975733 master-0 kubenswrapper[19803]: I0313 01:21:44.975380 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:44.975998 master-0 kubenswrapper[19803]: I0313 01:21:44.975715 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-kubelet-dir\") pod \"installer-5-master-0\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:44.975998 master-0 kubenswrapper[19803]: I0313 01:21:44.975873 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-var-lock\") pod \"installer-5-master-0\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:45.007134 master-0 kubenswrapper[19803]: I0313 01:21:45.007027 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6d93d3d-2899-4962-a25a-712e2fb9584b-kube-api-access\") pod \"installer-5-master-0\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:45.193824 master-0 kubenswrapper[19803]: I0313 01:21:45.193494 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0"
Mar 13 01:21:45.699385 master-0 kubenswrapper[19803]: I0313 01:21:45.699257 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-master-0"]
Mar 13 01:21:45.718987 master-0 kubenswrapper[19803]: W0313 01:21:45.718901 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-poda6d93d3d_2899_4962_a25a_712e2fb9584b.slice/crio-a7fbb77c663751e53bf6fc59da550dafa46cbd13e3be7b8632720b6247664384 WatchSource:0}: Error finding container a7fbb77c663751e53bf6fc59da550dafa46cbd13e3be7b8632720b6247664384: Status 404 returned error can't find the container with id a7fbb77c663751e53bf6fc59da550dafa46cbd13e3be7b8632720b6247664384
Mar 13 01:21:46.224845 master-0 kubenswrapper[19803]: I0313 01:21:46.224155 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"a6d93d3d-2899-4962-a25a-712e2fb9584b","Type":"ContainerStarted","Data":"be023853843b8ce8b0839a79e0987fc4270abfc7026b848ed76dc5c371fe5468"}
Mar 13 01:21:46.224845 master-0 kubenswrapper[19803]: I0313 01:21:46.224346 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"a6d93d3d-2899-4962-a25a-712e2fb9584b","Type":"ContainerStarted","Data":"a7fbb77c663751e53bf6fc59da550dafa46cbd13e3be7b8632720b6247664384"}
Mar 13 01:21:46.281448 master-0 kubenswrapper[19803]: I0313 01:21:46.281305 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-master-0" podStartSLOduration=2.281270073 podStartE2EDuration="2.281270073s" podCreationTimestamp="2026-03-13 01:21:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:21:46.276917557 +0000 UTC m=+254.242065286" watchObservedRunningTime="2026-03-13 01:21:46.281270073 +0000 UTC m=+254.246417792"
Mar 13 01:21:46.819400 master-0 kubenswrapper[19803]: I0313 01:21:46.819278 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:21:46.819939 master-0 kubenswrapper[19803]: E0313 01:21:46.819563 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:21:46.819939 master-0 kubenswrapper[19803]: E0313 01:21:46.819648 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:21:46.819939 master-0 kubenswrapper[19803]: E0313 01:21:46.819762 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:23:48.819740949 +0000 UTC m=+376.784888638 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:21:46.819939 master-0 kubenswrapper[19803]: I0313 01:21:46.819766 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:21:46.819939 master-0 kubenswrapper[19803]: I0313 01:21:46.819864 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:21:46.820573 master-0 kubenswrapper[19803]: E0313 01:21:46.820425 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:21:46.820573 master-0 kubenswrapper[19803]: E0313 01:21:46.820541 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:21:46.820819 master-0 kubenswrapper[19803]: E0313 01:21:46.820665 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:23:48.82062621 +0000 UTC m=+376.785773929 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:21:46.824573 master-0 kubenswrapper[19803]: I0313 01:21:46.824305 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"installer-4-master-0\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") " pod="openshift-kube-scheduler/installer-4-master-0"
Mar 13 01:21:46.922293 master-0 kubenswrapper[19803]: I0313 01:21:46.922190 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") pod \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\" (UID: \"7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90\") "
Mar 13 01:21:46.927423 master-0 kubenswrapper[19803]: I0313 01:21:46.927330 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90" (UID: "7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:21:47.025751 master-0 kubenswrapper[19803]: I0313 01:21:47.025656 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 01:21:49.857728 master-0 kubenswrapper[19803]: I0313 01:21:49.857667 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-thhrl"]
Mar 13 01:21:49.859771 master-0 kubenswrapper[19803]: I0313 01:21:49.859749 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:49.862616 master-0 kubenswrapper[19803]: I0313 01:21:49.862555 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-r5kgk"
Mar 13 01:21:49.862756 master-0 kubenswrapper[19803]: I0313 01:21:49.862572 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-sysctl-allowlist"
Mar 13 01:21:49.994723 master-0 kubenswrapper[19803]: I0313 01:21:49.994658 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mghk\" (UniqueName: \"kubernetes.io/projected/4626655d-add4-4cbd-9ba7-7082f63db442-kube-api-access-8mghk\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:49.995214 master-0 kubenswrapper[19803]: I0313 01:21:49.995190 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4626655d-add4-4cbd-9ba7-7082f63db442-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:49.995362 master-0 kubenswrapper[19803]: I0313 01:21:49.995342 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4626655d-add4-4cbd-9ba7-7082f63db442-ready\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:49.995501 master-0 kubenswrapper[19803]: I0313 01:21:49.995483 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4626655d-add4-4cbd-9ba7-7082f63db442-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:50.097370 master-0 kubenswrapper[19803]: I0313 01:21:50.097290 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4626655d-add4-4cbd-9ba7-7082f63db442-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:50.097840 master-0 kubenswrapper[19803]: I0313 01:21:50.097503 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4626655d-add4-4cbd-9ba7-7082f63db442-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:50.098172 master-0 kubenswrapper[19803]: I0313 01:21:50.098117 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mghk\" (UniqueName: \"kubernetes.io/projected/4626655d-add4-4cbd-9ba7-7082f63db442-kube-api-access-8mghk\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:50.098415 master-0 kubenswrapper[19803]: I0313 01:21:50.098384 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4626655d-add4-4cbd-9ba7-7082f63db442-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:50.098532 master-0 kubenswrapper[19803]: I0313 01:21:50.098521 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4626655d-add4-4cbd-9ba7-7082f63db442-ready\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:50.099792 master-0 kubenswrapper[19803]: I0313 01:21:50.099704 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4626655d-add4-4cbd-9ba7-7082f63db442-ready\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:50.100117 master-0 kubenswrapper[19803]: I0313 01:21:50.100067 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4626655d-add4-4cbd-9ba7-7082f63db442-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:50.129162 master-0 kubenswrapper[19803]: I0313 01:21:50.128985 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mghk\" (UniqueName: \"kubernetes.io/projected/4626655d-add4-4cbd-9ba7-7082f63db442-kube-api-access-8mghk\") pod \"cni-sysctl-allowlist-ds-thhrl\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:50.179725 master-0 kubenswrapper[19803]: I0313 01:21:50.179656 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:50.216394 master-0 kubenswrapper[19803]: W0313 01:21:50.216326 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4626655d_add4_4cbd_9ba7_7082f63db442.slice/crio-ad97042f88d36875fd55084da168180520af2eff0a3094da53816fbf621d63db WatchSource:0}: Error finding container ad97042f88d36875fd55084da168180520af2eff0a3094da53816fbf621d63db: Status 404 returned error can't find the container with id ad97042f88d36875fd55084da168180520af2eff0a3094da53816fbf621d63db
Mar 13 01:21:50.272107 master-0 kubenswrapper[19803]: I0313 01:21:50.272014 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" event={"ID":"4626655d-add4-4cbd-9ba7-7082f63db442","Type":"ContainerStarted","Data":"ad97042f88d36875fd55084da168180520af2eff0a3094da53816fbf621d63db"}
Mar 13 01:21:51.281653 master-0 kubenswrapper[19803]: I0313 01:21:51.281581 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" event={"ID":"4626655d-add4-4cbd-9ba7-7082f63db442","Type":"ContainerStarted","Data":"758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0"}
Mar 13 01:21:51.282186 master-0 kubenswrapper[19803]: I0313 01:21:51.281833 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:51.318929 master-0 kubenswrapper[19803]: I0313 01:21:51.318779 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" podStartSLOduration=2.318741691 podStartE2EDuration="2.318741691s" podCreationTimestamp="2026-03-13 01:21:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:21:51.313089465 +0000 UTC m=+259.278237144" watchObservedRunningTime="2026-03-13 01:21:51.318741691 +0000 UTC m=+259.283889410"
Mar 13 01:21:51.324609 master-0 kubenswrapper[19803]: I0313 01:21:51.324267 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl"
Mar 13 01:21:51.854239 master-0 kubenswrapper[19803]: I0313 01:21:51.854154 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-thhrl"]
Mar 13 01:21:53.301907 master-0 kubenswrapper[19803]: I0313 01:21:53.301795 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" podUID="4626655d-add4-4cbd-9ba7-7082f63db442" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" gracePeriod=30
Mar 13 01:21:58.212857 master-0 kubenswrapper[19803]: E0313 01:21:58.212759 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[trusted-ca], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" podUID="a1d1a41c-8533-4854-abea-ed42c4d7c71f"
Mar 13 01:21:58.377558 master-0 kubenswrapper[19803]: I0313 01:21:58.377437 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:21:59.921578 master-0 kubenswrapper[19803]: I0313 01:21:59.921491 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"]
Mar 13 01:21:59.922644 master-0 kubenswrapper[19803]: I0313 01:21:59.922598 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"
Mar 13 01:21:59.925382 master-0 kubenswrapper[19803]: I0313 01:21:59.925322 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-kmk7p"
Mar 13 01:21:59.949164 master-0 kubenswrapper[19803]: I0313 01:21:59.949083 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"]
Mar 13 01:22:00.028130 master-0 kubenswrapper[19803]: I0313 01:22:00.027537 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41ab5042-7d9a-4b2d-b00b-cd5159313262-webhook-certs\") pod \"multus-admission-controller-56bbfd46b8-fb5cr\" (UID: \"41ab5042-7d9a-4b2d-b00b-cd5159313262\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"
Mar 13 01:22:00.028130 master-0 kubenswrapper[19803]: I0313 01:22:00.027810 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbmc6\" (UniqueName: \"kubernetes.io/projected/41ab5042-7d9a-4b2d-b00b-cd5159313262-kube-api-access-dbmc6\") pod \"multus-admission-controller-56bbfd46b8-fb5cr\" (UID: \"41ab5042-7d9a-4b2d-b00b-cd5159313262\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"
Mar 13 01:22:00.133535 master-0 kubenswrapper[19803]: I0313 01:22:00.132729 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41ab5042-7d9a-4b2d-b00b-cd5159313262-webhook-certs\") pod \"multus-admission-controller-56bbfd46b8-fb5cr\" (UID: \"41ab5042-7d9a-4b2d-b00b-cd5159313262\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"
Mar 13 01:22:00.133535 master-0 kubenswrapper[19803]: I0313 01:22:00.132848 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dbmc6\" (UniqueName: \"kubernetes.io/projected/41ab5042-7d9a-4b2d-b00b-cd5159313262-kube-api-access-dbmc6\") pod \"multus-admission-controller-56bbfd46b8-fb5cr\" (UID: \"41ab5042-7d9a-4b2d-b00b-cd5159313262\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"
Mar 13 01:22:00.142197 master-0 kubenswrapper[19803]: I0313 01:22:00.139013 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/41ab5042-7d9a-4b2d-b00b-cd5159313262-webhook-certs\") pod \"multus-admission-controller-56bbfd46b8-fb5cr\" (UID: \"41ab5042-7d9a-4b2d-b00b-cd5159313262\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"
Mar 13 01:22:00.163298 master-0 kubenswrapper[19803]: I0313 01:22:00.163239 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dbmc6\" (UniqueName: \"kubernetes.io/projected/41ab5042-7d9a-4b2d-b00b-cd5159313262-kube-api-access-dbmc6\") pod \"multus-admission-controller-56bbfd46b8-fb5cr\" (UID: \"41ab5042-7d9a-4b2d-b00b-cd5159313262\") " pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"
Mar 13 01:22:00.184952 master-0 kubenswrapper[19803]: E0313 01:22:00.184833 19803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 01:22:00.190523 master-0 kubenswrapper[19803]: E0313 01:22:00.188881 19803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 01:22:00.192591 master-0 kubenswrapper[19803]: E0313 01:22:00.190525 19803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" cmd=["/bin/bash","-c","test -f /ready/ready"]
Mar 13 01:22:00.192591 master-0 kubenswrapper[19803]: E0313 01:22:00.190650 19803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" podUID="4626655d-add4-4cbd-9ba7-7082f63db442" containerName="kube-multus-additional-cni-plugins"
Mar 13 01:22:00.286235 master-0 kubenswrapper[19803]: I0313 01:22:00.286114 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"
Mar 13 01:22:00.772930 master-0 kubenswrapper[19803]: I0313 01:22:00.772832 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr"]
Mar 13 01:22:00.779553 master-0 kubenswrapper[19803]: W0313 01:22:00.778870 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod41ab5042_7d9a_4b2d_b00b_cd5159313262.slice/crio-4bccaa45a1ad23243301ed851909c453d4070fcc3d0baa9039d43a169c3d9252 WatchSource:0}: Error finding container 4bccaa45a1ad23243301ed851909c453d4070fcc3d0baa9039d43a169c3d9252: Status 404 returned error can't find the container with id 4bccaa45a1ad23243301ed851909c453d4070fcc3d0baa9039d43a169c3d9252
Mar 13 01:22:01.418118 master-0 kubenswrapper[19803]: I0313 01:22:01.417929 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr" event={"ID":"41ab5042-7d9a-4b2d-b00b-cd5159313262","Type":"ContainerStarted","Data":"16ad45f0a6169611d1ee79535753acc1c705b1e5260df7995c85d72cc6e15020"}
Mar 13 01:22:01.418118 master-0 kubenswrapper[19803]: I0313 01:22:01.418001 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr" event={"ID":"41ab5042-7d9a-4b2d-b00b-cd5159313262","Type":"ContainerStarted","Data":"4bccaa45a1ad23243301ed851909c453d4070fcc3d0baa9039d43a169c3d9252"}
Mar 13 01:22:01.461997 master-0 kubenswrapper[19803]: I0313 01:22:01.461868 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:22:01.463961 master-0 kubenswrapper[19803]: I0313 01:22:01.463903 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a1d1a41c-8533-4854-abea-ed42c4d7c71f-trusted-ca\") pod \"console-operator-6c7fb6b958-4cbn4\" (UID: \"a1d1a41c-8533-4854-abea-ed42c4d7c71f\") " pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:22:01.681959 master-0 kubenswrapper[19803]: I0313 01:22:01.681778 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-g9s2p"
Mar 13 01:22:01.689788 master-0 kubenswrapper[19803]: I0313 01:22:01.689741 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:22:02.218592 master-0 kubenswrapper[19803]: I0313 01:22:02.218134 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-6c7fb6b958-4cbn4"]
Mar 13 01:22:02.231006 master-0 kubenswrapper[19803]: W0313 01:22:02.230895 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1d1a41c_8533_4854_abea_ed42c4d7c71f.slice/crio-59fa11b30ce51fdc38722997f6546a1b7520ed390ef13992e397b45647ed73a4 WatchSource:0}: Error finding container 59fa11b30ce51fdc38722997f6546a1b7520ed390ef13992e397b45647ed73a4: Status 404 returned error can't find the container with id 59fa11b30ce51fdc38722997f6546a1b7520ed390ef13992e397b45647ed73a4
Mar 13 01:22:02.435545 master-0 kubenswrapper[19803]: I0313 01:22:02.435420 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" event={"ID":"a1d1a41c-8533-4854-abea-ed42c4d7c71f","Type":"ContainerStarted","Data":"59fa11b30ce51fdc38722997f6546a1b7520ed390ef13992e397b45647ed73a4"}
Mar 13 01:22:02.439826 master-0 kubenswrapper[19803]: I0313 01:22:02.439779 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr" event={"ID":"41ab5042-7d9a-4b2d-b00b-cd5159313262","Type":"ContainerStarted","Data":"80f1dc3d87ac4a9d19358e1c56bdb41ae3e51be7e22eb90ca4bbb41066e5968f"}
Mar 13 01:22:02.475432 master-0 kubenswrapper[19803]: I0313 01:22:02.472838 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-56bbfd46b8-fb5cr" podStartSLOduration=3.472788434 podStartE2EDuration="3.472788434s" podCreationTimestamp="2026-03-13 01:21:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:22:02.467725522 +0000 UTC m=+270.432873261" watchObservedRunningTime="2026-03-13 01:22:02.472788434 +0000 UTC m=+270.437936143"
Mar 13 01:22:02.516164 master-0 kubenswrapper[19803]: I0313 01:22:02.515874 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddtwn"]
Mar 13 01:22:02.517198 master-0 kubenswrapper[19803]: I0313 01:22:02.516480 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" podUID="161d2fa6-a541-427a-a3e9-3297102a26f5" containerName="multus-admission-controller" containerID="cri-o://8f8f696e9a8bf7dc6e42d0e7944725436b3a7019ffcb294c234c413493797ce3" gracePeriod=30
Mar 13 01:22:02.519624 master-0 kubenswrapper[19803]: I0313 01:22:02.517568 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" podUID="161d2fa6-a541-427a-a3e9-3297102a26f5" containerName="kube-rbac-proxy" containerID="cri-o://29a58358b12bdde755e9400ad8a4200dcdb32c73e3b68b4a2a8493087061b74e" gracePeriod=30
Mar 13 01:22:03.453365 master-0 kubenswrapper[19803]: I0313 01:22:03.453266 19803 generic.go:334] "Generic (PLEG): container finished" podID="161d2fa6-a541-427a-a3e9-3297102a26f5" containerID="29a58358b12bdde755e9400ad8a4200dcdb32c73e3b68b4a2a8493087061b74e" exitCode=0
Mar 13 01:22:03.454144 master-0 kubenswrapper[19803]: I0313 01:22:03.453352 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" event={"ID":"161d2fa6-a541-427a-a3e9-3297102a26f5","Type":"ContainerDied","Data":"29a58358b12bdde755e9400ad8a4200dcdb32c73e3b68b4a2a8493087061b74e"}
Mar 13 01:22:06.476165 master-0 kubenswrapper[19803]: I0313 01:22:06.476112 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" event={"ID":"a1d1a41c-8533-4854-abea-ed42c4d7c71f","Type":"ContainerStarted","Data":"7c52ad92cbf56dadaa05ef57f670afb07598809def8accfde3ca4b5cc2f712d9"}
Mar 13 01:22:06.476900 master-0 kubenswrapper[19803]: I0313 01:22:06.476723 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:22:06.657876 master-0 kubenswrapper[19803]: I0313 01:22:06.657757 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4"
Mar 13 01:22:06.685088 master-0 kubenswrapper[19803]: I0313 01:22:06.685010 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-6c7fb6b958-4cbn4" podStartSLOduration=252.207580811 podStartE2EDuration="4m15.684995856s" podCreationTimestamp="2026-03-13 01:17:51 +0000 UTC" firstStartedPulling="2026-03-13 01:22:02.234459867 +0000 UTC m=+270.199607586" lastFinishedPulling="2026-03-13 01:22:05.711874912 +0000 UTC m=+273.677022631" observedRunningTime="2026-03-13 01:22:06.506174996 +0000 UTC m=+274.471322745" watchObservedRunningTime="2026-03-13 01:22:06.684995856 +0000 UTC m=+274.650143535"
Mar 13 01:22:06.876170 master-0 kubenswrapper[19803]: I0313 01:22:06.876101 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-84f57b9877-ffb2n"]
Mar 13 01:22:06.877163 master-0 kubenswrapper[19803]: I0313 01:22:06.877140 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-ffb2n"
Mar 13 01:22:06.879193 master-0 kubenswrapper[19803]: I0313 01:22:06.879140 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 13 01:22:06.879560 master-0 kubenswrapper[19803]: I0313 01:22:06.879437 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt"
Mar 13 01:22:06.879846 master-0 kubenswrapper[19803]: I0313 01:22:06.879705 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-m8298"
Mar 13 01:22:06.892313 master-0 kubenswrapper[19803]: I0313 01:22:06.892249 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-ffb2n"]
Mar 13 01:22:06.916650 master-0 kubenswrapper[19803]: I0313 01:22:06.916451 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc57j\" (UniqueName: \"kubernetes.io/projected/cd044580-0236-4ee8-9a26-b8513e400238-kube-api-access-cc57j\") pod \"downloads-84f57b9877-ffb2n\" (UID: \"cd044580-0236-4ee8-9a26-b8513e400238\") " pod="openshift-console/downloads-84f57b9877-ffb2n"
Mar 13 01:22:07.017540 master-0 kubenswrapper[19803]: I0313 01:22:07.017450 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cc57j\" (UniqueName: \"kubernetes.io/projected/cd044580-0236-4ee8-9a26-b8513e400238-kube-api-access-cc57j\") pod \"downloads-84f57b9877-ffb2n\" (UID: \"cd044580-0236-4ee8-9a26-b8513e400238\") " pod="openshift-console/downloads-84f57b9877-ffb2n"
Mar 13 01:22:07.033543 master-0 kubenswrapper[19803]: I0313 01:22:07.033462 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cc57j\" (UniqueName: \"kubernetes.io/projected/cd044580-0236-4ee8-9a26-b8513e400238-kube-api-access-cc57j\") pod \"downloads-84f57b9877-ffb2n\" (UID: \"cd044580-0236-4ee8-9a26-b8513e400238\") " pod="openshift-console/downloads-84f57b9877-ffb2n"
Mar 13 01:22:07.202324 master-0 kubenswrapper[19803]: I0313 01:22:07.202168 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-84f57b9877-ffb2n"
Mar 13 01:22:07.633973 master-0 kubenswrapper[19803]: I0313 01:22:07.633909 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-84f57b9877-ffb2n"]
Mar 13 01:22:07.642708 master-0 kubenswrapper[19803]: W0313 01:22:07.641794 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd044580_0236_4ee8_9a26_b8513e400238.slice/crio-66832632dec5209f2624e5469d3296eeed214153127a1acd8ca9017cbc013e3d WatchSource:0}: Error finding container 66832632dec5209f2624e5469d3296eeed214153127a1acd8ca9017cbc013e3d: Status 404 returned error can't find the container with id 66832632dec5209f2624e5469d3296eeed214153127a1acd8ca9017cbc013e3d
Mar 13 01:22:08.495189 master-0 kubenswrapper[19803]: I0313 01:22:08.495067 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-ffb2n" event={"ID":"cd044580-0236-4ee8-9a26-b8513e400238","Type":"ContainerStarted","Data":"66832632dec5209f2624e5469d3296eeed214153127a1acd8ca9017cbc013e3d"}
Mar 13 01:22:09.203327 master-0 kubenswrapper[19803]: I0313 01:22:09.203249 19803 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 13 01:22:09.203964 master-0 kubenswrapper[19803]: I0313 01:22:09.203914 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" containerID="cri-o://59b81ddf96703b46c61723679f4eccced325378be4bf3ce47532a5cf8c25aff1" gracePeriod=30
Mar 13 01:22:09.204120 master-0 kubenswrapper[19803]: I0313 01:22:09.204066 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" containerID="cri-o://3b4b0099ff3715076e4da8c307cf4cdf19113ad975d741008a026d470fd6e8de" gracePeriod=30
Mar 13 01:22:09.204209 master-0 kubenswrapper[19803]: I0313 01:22:09.204001 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" containerID="cri-o://bf41e0708018a7a42a9ea985f7ec3256a3866f84520062060092284abe939c72" gracePeriod=30
Mar 13 01:22:09.204209 master-0 kubenswrapper[19803]: I0313 01:22:09.204006 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" containerID="cri-o://11afe1e82df06ef58f2b34ee7f14cab6582b1c3ebb23e73f966071d3f60bb7d3" gracePeriod=30
Mar 13 01:22:09.204317 master-0 kubenswrapper[19803]: I0313 01:22:09.203989 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-etcd/etcd-master-0" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" containerID="cri-o://d4307a8d99b06baad18f959ac230bad4c2bf7ab603532b53714a7efb8d542993" gracePeriod=30
Mar 13 01:22:09.206712 master-0 kubenswrapper[19803]: I0313 01:22:09.206680 19803 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-etcd/etcd-master-0"]
Mar 13 01:22:09.207052 master-0 kubenswrapper[19803]: E0313 01:22:09.207024 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5"
containerName="etcd-resources-copy" Mar 13 01:22:09.207106 master-0 kubenswrapper[19803]: I0313 01:22:09.207075 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 13 01:22:09.207106 master-0 kubenswrapper[19803]: E0313 01:22:09.207097 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 13 01:22:09.207106 master-0 kubenswrapper[19803]: I0313 01:22:09.207103 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 13 01:22:09.207106 master-0 kubenswrapper[19803]: E0313 01:22:09.207121 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 01:22:09.207106 master-0 kubenswrapper[19803]: I0313 01:22:09.207127 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 01:22:09.207329 master-0 kubenswrapper[19803]: E0313 01:22:09.207140 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 01:22:09.207329 master-0 kubenswrapper[19803]: I0313 01:22:09.207147 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 01:22:09.207329 master-0 kubenswrapper[19803]: E0313 01:22:09.207156 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 13 01:22:09.207329 master-0 kubenswrapper[19803]: I0313 01:22:09.207164 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 13 01:22:09.207329 master-0 kubenswrapper[19803]: E0313 01:22:09.207175 19803 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 01:22:09.207329 master-0 kubenswrapper[19803]: I0313 01:22:09.207181 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 01:22:09.207329 master-0 kubenswrapper[19803]: E0313 01:22:09.207191 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 01:22:09.207329 master-0 kubenswrapper[19803]: I0313 01:22:09.207197 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 01:22:09.207329 master-0 kubenswrapper[19803]: E0313 01:22:09.207204 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 01:22:09.207329 master-0 kubenswrapper[19803]: I0313 01:22:09.207210 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 01:22:09.207735 master-0 kubenswrapper[19803]: I0313 01:22:09.207364 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-readyz" Mar 13 01:22:09.207735 master-0 kubenswrapper[19803]: I0313 01:22:09.207387 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-rev" Mar 13 01:22:09.207735 master-0 kubenswrapper[19803]: I0313 01:22:09.207396 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd" Mar 13 01:22:09.207735 master-0 kubenswrapper[19803]: I0313 01:22:09.207408 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-metrics" Mar 13 01:22:09.207735 master-0 kubenswrapper[19803]: I0313 01:22:09.207421 
19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="setup" Mar 13 01:22:09.207735 master-0 kubenswrapper[19803]: I0313 01:22:09.207435 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-resources-copy" Mar 13 01:22:09.207735 master-0 kubenswrapper[19803]: I0313 01:22:09.207445 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcdctl" Mar 13 01:22:09.207735 master-0 kubenswrapper[19803]: I0313 01:22:09.207452 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" containerName="etcd-ensure-env-vars" Mar 13 01:22:09.262097 master-0 kubenswrapper[19803]: I0313 01:22:09.262014 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.262264 master-0 kubenswrapper[19803]: I0313 01:22:09.262123 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.262264 master-0 kubenswrapper[19803]: I0313 01:22:09.262149 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.262264 master-0 kubenswrapper[19803]: I0313 01:22:09.262188 19803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.262264 master-0 kubenswrapper[19803]: I0313 01:22:09.262238 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.262483 master-0 kubenswrapper[19803]: I0313 01:22:09.262439 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364062 master-0 kubenswrapper[19803]: I0313 01:22:09.364015 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364163 master-0 kubenswrapper[19803]: I0313 01:22:09.364098 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364163 master-0 kubenswrapper[19803]: I0313 01:22:09.364139 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: 
\"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364163 master-0 kubenswrapper[19803]: I0313 01:22:09.364140 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-resource-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364293 master-0 kubenswrapper[19803]: I0313 01:22:09.364213 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364372 master-0 kubenswrapper[19803]: I0313 01:22:09.364313 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-cert-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364464 master-0 kubenswrapper[19803]: I0313 01:22:09.364428 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-usr-local-bin\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364464 master-0 kubenswrapper[19803]: I0313 01:22:09.364425 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-log-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364594 master-0 
kubenswrapper[19803]: I0313 01:22:09.364463 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364594 master-0 kubenswrapper[19803]: I0313 01:22:09.364561 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-data-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364744 master-0 kubenswrapper[19803]: I0313 01:22:09.364722 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.364793 master-0 kubenswrapper[19803]: I0313 01:22:09.364761 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/29c709c82970b529e7b9b895aa92ef05-static-pod-dir\") pod \"etcd-master-0\" (UID: \"29c709c82970b529e7b9b895aa92ef05\") " pod="openshift-etcd/etcd-master-0" Mar 13 01:22:09.506465 master-0 kubenswrapper[19803]: I0313 01:22:09.506420 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 01:22:09.507288 master-0 kubenswrapper[19803]: I0313 01:22:09.507269 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 01:22:09.509066 master-0 kubenswrapper[19803]: I0313 01:22:09.509039 19803 generic.go:334] "Generic (PLEG): container finished" 
podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="d4307a8d99b06baad18f959ac230bad4c2bf7ab603532b53714a7efb8d542993" exitCode=2 Mar 13 01:22:09.509066 master-0 kubenswrapper[19803]: I0313 01:22:09.509063 19803 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="11afe1e82df06ef58f2b34ee7f14cab6582b1c3ebb23e73f966071d3f60bb7d3" exitCode=0 Mar 13 01:22:09.509155 master-0 kubenswrapper[19803]: I0313 01:22:09.509073 19803 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="bf41e0708018a7a42a9ea985f7ec3256a3866f84520062060092284abe939c72" exitCode=2 Mar 13 01:22:10.183565 master-0 kubenswrapper[19803]: E0313 01:22:10.183443 19803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 01:22:10.185470 master-0 kubenswrapper[19803]: E0313 01:22:10.185366 19803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 01:22:10.186694 master-0 kubenswrapper[19803]: E0313 01:22:10.186637 19803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 01:22:10.186803 master-0 kubenswrapper[19803]: E0313 01:22:10.186689 19803 prober.go:104] "Probe errored" err="rpc error: code = 
Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" podUID="4626655d-add4-4cbd-9ba7-7082f63db442" containerName="kube-multus-additional-cni-plugins" Mar 13 01:22:19.396480 master-0 kubenswrapper[19803]: E0313 01:22:19.396137 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:22:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:22:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:22:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:22:09Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:22:19.433935 master-0 kubenswrapper[19803]: E0313 01:22:19.433792 19803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:22:20.182187 master-0 kubenswrapper[19803]: E0313 
01:22:20.182100 19803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 01:22:20.183853 master-0 kubenswrapper[19803]: E0313 01:22:20.183585 19803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 01:22:20.185041 master-0 kubenswrapper[19803]: E0313 01:22:20.184997 19803 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" cmd=["/bin/bash","-c","test -f /ready/ready"] Mar 13 01:22:20.185136 master-0 kubenswrapper[19803]: E0313 01:22:20.185052 19803 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" podUID="4626655d-add4-4cbd-9ba7-7082f63db442" containerName="kube-multus-additional-cni-plugins" Mar 13 01:22:20.414603 master-0 kubenswrapper[19803]: E0313 01:22:20.414418 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[alertmanager-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/alertmanager-main-0" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" Mar 13 01:22:20.622055 master-0 kubenswrapper[19803]: I0313 
01:22:20.621978 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:22:23.466611 master-0 kubenswrapper[19803]: I0313 01:22:23.466558 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-thhrl_4626655d-add4-4cbd-9ba7-7082f63db442/kube-multus-additional-cni-plugins/0.log" Mar 13 01:22:23.467301 master-0 kubenswrapper[19803]: I0313 01:22:23.466650 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" Mar 13 01:22:23.473619 master-0 kubenswrapper[19803]: I0313 01:22:23.473567 19803 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/kube-controller-manager namespace/openshift-kube-controller-manager: Liveness probe status=failure output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" start-of-body= Mar 13 01:22:23.473705 master-0 kubenswrapper[19803]: I0313 01:22:23.473639 19803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused" Mar 13 01:22:23.611008 master-0 kubenswrapper[19803]: I0313 01:22:23.610847 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4626655d-add4-4cbd-9ba7-7082f63db442-ready\") pod \"4626655d-add4-4cbd-9ba7-7082f63db442\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " Mar 13 01:22:23.611369 master-0 kubenswrapper[19803]: I0313 01:22:23.611036 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/4626655d-add4-4cbd-9ba7-7082f63db442-tuning-conf-dir\") pod \"4626655d-add4-4cbd-9ba7-7082f63db442\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " Mar 13 01:22:23.611369 master-0 kubenswrapper[19803]: I0313 01:22:23.611244 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4626655d-add4-4cbd-9ba7-7082f63db442-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "4626655d-add4-4cbd-9ba7-7082f63db442" (UID: "4626655d-add4-4cbd-9ba7-7082f63db442"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:23.611369 master-0 kubenswrapper[19803]: I0313 01:22:23.611253 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mghk\" (UniqueName: \"kubernetes.io/projected/4626655d-add4-4cbd-9ba7-7082f63db442-kube-api-access-8mghk\") pod \"4626655d-add4-4cbd-9ba7-7082f63db442\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " Mar 13 01:22:23.611552 master-0 kubenswrapper[19803]: I0313 01:22:23.611367 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4626655d-add4-4cbd-9ba7-7082f63db442-ready" (OuterVolumeSpecName: "ready") pod "4626655d-add4-4cbd-9ba7-7082f63db442" (UID: "4626655d-add4-4cbd-9ba7-7082f63db442"). InnerVolumeSpecName "ready". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:22:23.611552 master-0 kubenswrapper[19803]: I0313 01:22:23.611397 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4626655d-add4-4cbd-9ba7-7082f63db442-cni-sysctl-allowlist\") pod \"4626655d-add4-4cbd-9ba7-7082f63db442\" (UID: \"4626655d-add4-4cbd-9ba7-7082f63db442\") " Mar 13 01:22:23.612266 master-0 kubenswrapper[19803]: I0313 01:22:23.612210 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4626655d-add4-4cbd-9ba7-7082f63db442-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "4626655d-add4-4cbd-9ba7-7082f63db442" (UID: "4626655d-add4-4cbd-9ba7-7082f63db442"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:22:23.612659 master-0 kubenswrapper[19803]: I0313 01:22:23.612623 19803 reconciler_common.go:293] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4626655d-add4-4cbd-9ba7-7082f63db442-tuning-conf-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:23.612739 master-0 kubenswrapper[19803]: I0313 01:22:23.612671 19803 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4626655d-add4-4cbd-9ba7-7082f63db442-cni-sysctl-allowlist\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:23.612739 master-0 kubenswrapper[19803]: I0313 01:22:23.612704 19803 reconciler_common.go:293] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/4626655d-add4-4cbd-9ba7-7082f63db442-ready\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:23.615922 master-0 kubenswrapper[19803]: I0313 01:22:23.615859 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/4626655d-add4-4cbd-9ba7-7082f63db442-kube-api-access-8mghk" (OuterVolumeSpecName: "kube-api-access-8mghk") pod "4626655d-add4-4cbd-9ba7-7082f63db442" (UID: "4626655d-add4-4cbd-9ba7-7082f63db442"). InnerVolumeSpecName "kube-api-access-8mghk". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:22:23.655151 master-0 kubenswrapper[19803]: I0313 01:22:23.655073 19803 generic.go:334] "Generic (PLEG): container finished" podID="dd3a989f-6c19-4f5d-b14f-369ed9941051" containerID="94e782c4fd48308e553bb97d16271a0c8d139701850895aec65301f10c7adeb8" exitCode=0 Mar 13 01:22:23.655305 master-0 kubenswrapper[19803]: I0313 01:22:23.655238 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"dd3a989f-6c19-4f5d-b14f-369ed9941051","Type":"ContainerDied","Data":"94e782c4fd48308e553bb97d16271a0c8d139701850895aec65301f10c7adeb8"} Mar 13 01:22:23.657665 master-0 kubenswrapper[19803]: I0313 01:22:23.657613 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-thhrl_4626655d-add4-4cbd-9ba7-7082f63db442/kube-multus-additional-cni-plugins/0.log" Mar 13 01:22:23.657754 master-0 kubenswrapper[19803]: I0313 01:22:23.657694 19803 generic.go:334] "Generic (PLEG): container finished" podID="4626655d-add4-4cbd-9ba7-7082f63db442" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" exitCode=137 Mar 13 01:22:23.657893 master-0 kubenswrapper[19803]: I0313 01:22:23.657832 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" Mar 13 01:22:23.657893 master-0 kubenswrapper[19803]: I0313 01:22:23.657859 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" event={"ID":"4626655d-add4-4cbd-9ba7-7082f63db442","Type":"ContainerDied","Data":"758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0"} Mar 13 01:22:23.658047 master-0 kubenswrapper[19803]: I0313 01:22:23.657973 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" event={"ID":"4626655d-add4-4cbd-9ba7-7082f63db442","Type":"ContainerDied","Data":"ad97042f88d36875fd55084da168180520af2eff0a3094da53816fbf621d63db"} Mar 13 01:22:23.658047 master-0 kubenswrapper[19803]: I0313 01:22:23.658004 19803 scope.go:117] "RemoveContainer" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" Mar 13 01:22:23.662386 master-0 kubenswrapper[19803]: I0313 01:22:23.662351 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log" Mar 13 01:22:23.662452 master-0 kubenswrapper[19803]: I0313 01:22:23.662430 19803 generic.go:334] "Generic (PLEG): container finished" podID="24e04786030519cf5fd9f600ea6710e9" containerID="5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82" exitCode=1 Mar 13 01:22:23.662495 master-0 kubenswrapper[19803]: I0313 01:22:23.662472 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerDied","Data":"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82"} Mar 13 01:22:23.663259 master-0 kubenswrapper[19803]: I0313 01:22:23.663224 19803 scope.go:117] "RemoveContainer" 
containerID="5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82" Mar 13 01:22:23.682406 master-0 kubenswrapper[19803]: I0313 01:22:23.682369 19803 scope.go:117] "RemoveContainer" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" Mar 13 01:22:23.683171 master-0 kubenswrapper[19803]: E0313 01:22:23.682958 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0\": container with ID starting with 758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0 not found: ID does not exist" containerID="758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0" Mar 13 01:22:23.683171 master-0 kubenswrapper[19803]: I0313 01:22:23.683069 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0"} err="failed to get container status \"758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0\": rpc error: code = NotFound desc = could not find container \"758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0\": container with ID starting with 758b44fd04cfb9f8f8829a72278b4485207381de6cb60d9676f6f44f57bc94f0 not found: ID does not exist" Mar 13 01:22:23.714965 master-0 kubenswrapper[19803]: I0313 01:22:23.714903 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mghk\" (UniqueName: \"kubernetes.io/projected/4626655d-add4-4cbd-9ba7-7082f63db442-kube-api-access-8mghk\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:24.155557 master-0 kubenswrapper[19803]: I0313 01:22:24.155349 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:22:24.675439 master-0 kubenswrapper[19803]: I0313 01:22:24.675346 19803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log" Mar 13 01:22:24.676311 master-0 kubenswrapper[19803]: I0313 01:22:24.675593 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635"} Mar 13 01:22:24.732037 master-0 kubenswrapper[19803]: I0313 01:22:24.731950 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:22:24.734339 master-0 kubenswrapper[19803]: I0313 01:22:24.734261 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:22:24.825734 master-0 kubenswrapper[19803]: I0313 01:22:24.825641 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-wq6hg" Mar 13 01:22:24.834341 master-0 kubenswrapper[19803]: I0313 01:22:24.834280 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:22:24.983051 master-0 kubenswrapper[19803]: I0313 01:22:24.983008 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 01:22:25.138081 master-0 kubenswrapper[19803]: I0313 01:22:25.138037 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-kubelet-dir\") pod \"dd3a989f-6c19-4f5d-b14f-369ed9941051\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " Mar 13 01:22:25.138432 master-0 kubenswrapper[19803]: I0313 01:22:25.138255 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-var-lock\") pod \"dd3a989f-6c19-4f5d-b14f-369ed9941051\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " Mar 13 01:22:25.138432 master-0 kubenswrapper[19803]: I0313 01:22:25.138284 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd3a989f-6c19-4f5d-b14f-369ed9941051-kube-api-access\") pod \"dd3a989f-6c19-4f5d-b14f-369ed9941051\" (UID: \"dd3a989f-6c19-4f5d-b14f-369ed9941051\") " Mar 13 01:22:25.138504 master-0 kubenswrapper[19803]: I0313 01:22:25.138465 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "dd3a989f-6c19-4f5d-b14f-369ed9941051" (UID: "dd3a989f-6c19-4f5d-b14f-369ed9941051"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:25.138559 master-0 kubenswrapper[19803]: I0313 01:22:25.138528 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-var-lock" (OuterVolumeSpecName: "var-lock") pod "dd3a989f-6c19-4f5d-b14f-369ed9941051" (UID: "dd3a989f-6c19-4f5d-b14f-369ed9941051"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:25.138636 master-0 kubenswrapper[19803]: I0313 01:22:25.138618 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:25.138672 master-0 kubenswrapper[19803]: I0313 01:22:25.138638 19803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/dd3a989f-6c19-4f5d-b14f-369ed9941051-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:25.141136 master-0 kubenswrapper[19803]: I0313 01:22:25.141101 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd3a989f-6c19-4f5d-b14f-369ed9941051-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "dd3a989f-6c19-4f5d-b14f-369ed9941051" (UID: "dd3a989f-6c19-4f5d-b14f-369ed9941051"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:22:25.240079 master-0 kubenswrapper[19803]: I0313 01:22:25.240010 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/dd3a989f-6c19-4f5d-b14f-369ed9941051-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:25.690897 master-0 kubenswrapper[19803]: I0313 01:22:25.690719 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/installer-2-master-0" Mar 13 01:22:25.690897 master-0 kubenswrapper[19803]: I0313 01:22:25.690699 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/installer-2-master-0" event={"ID":"dd3a989f-6c19-4f5d-b14f-369ed9941051","Type":"ContainerDied","Data":"fed581821cbe4ecf53d53e5239f184430e9714c2ea0455427df415e068ef49e5"} Mar 13 01:22:25.690897 master-0 kubenswrapper[19803]: I0313 01:22:25.690841 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fed581821cbe4ecf53d53e5239f184430e9714c2ea0455427df415e068ef49e5" Mar 13 01:22:26.462621 master-0 kubenswrapper[19803]: E0313 01:22:26.462495 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[prometheus-trusted-ca-bundle], unattached volumes=[], failed to process volumes=[]: context deadline exceeded" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" Mar 13 01:22:26.697585 master-0 kubenswrapper[19803]: I0313 01:22:26.696988 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:22:26.799449 master-0 kubenswrapper[19803]: I0313 01:22:26.799334 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:22:26.805981 master-0 kubenswrapper[19803]: I0313 01:22:26.805942 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:22:27.704529 master-0 kubenswrapper[19803]: I0313 01:22:27.704428 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:22:29.397284 master-0 kubenswrapper[19803]: E0313 01:22:29.397177 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:22:29.435203 master-0 kubenswrapper[19803]: E0313 01:22:29.435106 19803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:22:30.430951 master-0 kubenswrapper[19803]: I0313 01:22:30.430908 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:22:30.432152 master-0 kubenswrapper[19803]: I0313 01:22:30.432118 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:22:30.600550 master-0 kubenswrapper[19803]: I0313 01:22:30.600457 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-f4w47" Mar 13 01:22:30.609668 master-0 kubenswrapper[19803]: I0313 01:22:30.609612 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:22:34.164242 master-0 kubenswrapper[19803]: I0313 01:22:34.164164 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:22:37.784300 master-0 kubenswrapper[19803]: I0313 01:22:37.783965 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-ddtwn_161d2fa6-a541-427a-a3e9-3297102a26f5/multus-admission-controller/0.log" Mar 13 01:22:37.784300 master-0 kubenswrapper[19803]: I0313 01:22:37.784046 19803 generic.go:334] "Generic (PLEG): container finished" podID="161d2fa6-a541-427a-a3e9-3297102a26f5" containerID="8f8f696e9a8bf7dc6e42d0e7944725436b3a7019ffcb294c234c413493797ce3" exitCode=137 Mar 13 01:22:37.784300 master-0 kubenswrapper[19803]: I0313 01:22:37.784137 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" event={"ID":"161d2fa6-a541-427a-a3e9-3297102a26f5","Type":"ContainerDied","Data":"8f8f696e9a8bf7dc6e42d0e7944725436b3a7019ffcb294c234c413493797ce3"} Mar 13 01:22:37.786207 master-0 kubenswrapper[19803]: I0313 01:22:37.786156 19803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_a6d93d3d-2899-4962-a25a-712e2fb9584b/installer/0.log" Mar 13 01:22:37.786327 master-0 kubenswrapper[19803]: I0313 01:22:37.786277 19803 generic.go:334] "Generic (PLEG): container finished" podID="a6d93d3d-2899-4962-a25a-712e2fb9584b" containerID="be023853843b8ce8b0839a79e0987fc4270abfc7026b848ed76dc5c371fe5468" exitCode=1 Mar 13 01:22:37.786382 master-0 kubenswrapper[19803]: I0313 01:22:37.786332 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"a6d93d3d-2899-4962-a25a-712e2fb9584b","Type":"ContainerDied","Data":"be023853843b8ce8b0839a79e0987fc4270abfc7026b848ed76dc5c371fe5468"} Mar 13 01:22:38.257680 master-0 kubenswrapper[19803]: I0313 01:22:38.257622 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-ddtwn_161d2fa6-a541-427a-a3e9-3297102a26f5/multus-admission-controller/0.log" Mar 13 01:22:38.257858 master-0 kubenswrapper[19803]: I0313 01:22:38.257738 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:22:38.273662 master-0 kubenswrapper[19803]: I0313 01:22:38.273492 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5lg5\" (UniqueName: \"kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5\") pod \"161d2fa6-a541-427a-a3e9-3297102a26f5\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " Mar 13 01:22:38.273852 master-0 kubenswrapper[19803]: I0313 01:22:38.273666 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") pod \"161d2fa6-a541-427a-a3e9-3297102a26f5\" (UID: \"161d2fa6-a541-427a-a3e9-3297102a26f5\") " Mar 13 01:22:38.277997 master-0 kubenswrapper[19803]: I0313 01:22:38.277945 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "161d2fa6-a541-427a-a3e9-3297102a26f5" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:22:38.280132 master-0 kubenswrapper[19803]: I0313 01:22:38.280059 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5" (OuterVolumeSpecName: "kube-api-access-q5lg5") pod "161d2fa6-a541-427a-a3e9-3297102a26f5" (UID: "161d2fa6-a541-427a-a3e9-3297102a26f5"). InnerVolumeSpecName "kube-api-access-q5lg5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:22:38.377277 master-0 kubenswrapper[19803]: I0313 01:22:38.377185 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5lg5\" (UniqueName: \"kubernetes.io/projected/161d2fa6-a541-427a-a3e9-3297102a26f5-kube-api-access-q5lg5\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:38.377277 master-0 kubenswrapper[19803]: I0313 01:22:38.377236 19803 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/161d2fa6-a541-427a-a3e9-3297102a26f5-webhook-certs\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:38.801302 master-0 kubenswrapper[19803]: I0313 01:22:38.801201 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-8d675b596-ddtwn_161d2fa6-a541-427a-a3e9-3297102a26f5/multus-admission-controller/0.log" Mar 13 01:22:38.802109 master-0 kubenswrapper[19803]: I0313 01:22:38.801424 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" event={"ID":"161d2fa6-a541-427a-a3e9-3297102a26f5","Type":"ContainerDied","Data":"d285e2cd3ad810bbe2e32e2bf486a60f25f240f9aaa8797930d7581cb9051bc3"} Mar 13 01:22:38.802109 master-0 kubenswrapper[19803]: I0313 01:22:38.801560 19803 scope.go:117] "RemoveContainer" containerID="29a58358b12bdde755e9400ad8a4200dcdb32c73e3b68b4a2a8493087061b74e" Mar 13 01:22:38.802109 master-0 kubenswrapper[19803]: I0313 01:22:38.801678 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-8d675b596-ddtwn" Mar 13 01:22:39.397762 master-0 kubenswrapper[19803]: E0313 01:22:39.397677 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:22:39.436494 master-0 kubenswrapper[19803]: E0313 01:22:39.436388 19803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:22:39.666589 master-0 kubenswrapper[19803]: I0313 01:22:39.664516 19803 scope.go:117] "RemoveContainer" containerID="8f8f696e9a8bf7dc6e42d0e7944725436b3a7019ffcb294c234c413493797ce3" Mar 13 01:22:39.737665 master-0 kubenswrapper[19803]: I0313 01:22:39.737628 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_a6d93d3d-2899-4962-a25a-712e2fb9584b/installer/0.log" Mar 13 01:22:39.737808 master-0 kubenswrapper[19803]: I0313 01:22:39.737725 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 01:22:39.803918 master-0 kubenswrapper[19803]: I0313 01:22:39.803796 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6d93d3d-2899-4962-a25a-712e2fb9584b-kube-api-access\") pod \"a6d93d3d-2899-4962-a25a-712e2fb9584b\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " Mar 13 01:22:39.803918 master-0 kubenswrapper[19803]: I0313 01:22:39.803880 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-var-lock\") pod \"a6d93d3d-2899-4962-a25a-712e2fb9584b\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " Mar 13 01:22:39.804459 master-0 kubenswrapper[19803]: I0313 01:22:39.804131 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-var-lock" (OuterVolumeSpecName: "var-lock") pod "a6d93d3d-2899-4962-a25a-712e2fb9584b" (UID: "a6d93d3d-2899-4962-a25a-712e2fb9584b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:39.804459 master-0 kubenswrapper[19803]: I0313 01:22:39.804191 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-kubelet-dir\") pod \"a6d93d3d-2899-4962-a25a-712e2fb9584b\" (UID: \"a6d93d3d-2899-4962-a25a-712e2fb9584b\") " Mar 13 01:22:39.804459 master-0 kubenswrapper[19803]: I0313 01:22:39.804345 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "a6d93d3d-2899-4962-a25a-712e2fb9584b" (UID: "a6d93d3d-2899-4962-a25a-712e2fb9584b"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:39.804783 master-0 kubenswrapper[19803]: I0313 01:22:39.804720 19803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:39.804783 master-0 kubenswrapper[19803]: I0313 01:22:39.804758 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/a6d93d3d-2899-4962-a25a-712e2fb9584b-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:39.808921 master-0 kubenswrapper[19803]: I0313 01:22:39.808871 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6d93d3d-2899-4962-a25a-712e2fb9584b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "a6d93d3d-2899-4962-a25a-712e2fb9584b" (UID: "a6d93d3d-2899-4962-a25a-712e2fb9584b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:22:39.812175 master-0 kubenswrapper[19803]: I0313 01:22:39.812134 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_a6d93d3d-2899-4962-a25a-712e2fb9584b/installer/0.log" Mar 13 01:22:39.812357 master-0 kubenswrapper[19803]: I0313 01:22:39.812326 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-master-0" Mar 13 01:22:39.812421 master-0 kubenswrapper[19803]: I0313 01:22:39.812343 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-master-0" event={"ID":"a6d93d3d-2899-4962-a25a-712e2fb9584b","Type":"ContainerDied","Data":"a7fbb77c663751e53bf6fc59da550dafa46cbd13e3be7b8632720b6247664384"} Mar 13 01:22:39.812421 master-0 kubenswrapper[19803]: I0313 01:22:39.812400 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a7fbb77c663751e53bf6fc59da550dafa46cbd13e3be7b8632720b6247664384" Mar 13 01:22:39.817281 master-0 kubenswrapper[19803]: I0313 01:22:39.817245 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 01:22:39.818686 master-0 kubenswrapper[19803]: I0313 01:22:39.818657 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 01:22:39.819673 master-0 kubenswrapper[19803]: I0313 01:22:39.819631 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 01:22:39.820219 master-0 kubenswrapper[19803]: I0313 01:22:39.820190 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 01:22:39.821534 master-0 kubenswrapper[19803]: I0313 01:22:39.821480 19803 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" containerID="3b4b0099ff3715076e4da8c307cf4cdf19113ad975d741008a026d470fd6e8de" exitCode=137 Mar 13 01:22:39.821603 master-0 kubenswrapper[19803]: I0313 01:22:39.821586 19803 generic.go:334] "Generic (PLEG): container finished" podID="8e52bef89f4b50e4590a1719bcc5d7e5" 
containerID="59b81ddf96703b46c61723679f4eccced325378be4bf3ce47532a5cf8c25aff1" exitCode=137 Mar 13 01:22:39.906187 master-0 kubenswrapper[19803]: I0313 01:22:39.906106 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/a6d93d3d-2899-4962-a25a-712e2fb9584b-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:40.403968 master-0 kubenswrapper[19803]: I0313 01:22:40.403891 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 01:22:40.405599 master-0 kubenswrapper[19803]: I0313 01:22:40.405495 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 01:22:40.406679 master-0 kubenswrapper[19803]: I0313 01:22:40.406631 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 01:22:40.407190 master-0 kubenswrapper[19803]: I0313 01:22:40.407147 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 01:22:40.410105 master-0 kubenswrapper[19803]: I0313 01:22:40.410064 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 01:22:40.518430 master-0 kubenswrapper[19803]: I0313 01:22:40.518336 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 01:22:40.518574 master-0 kubenswrapper[19803]: I0313 01:22:40.518469 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 01:22:40.518636 master-0 kubenswrapper[19803]: I0313 01:22:40.518614 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 01:22:40.518703 master-0 kubenswrapper[19803]: I0313 01:22:40.518682 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 01:22:40.518829 master-0 kubenswrapper[19803]: I0313 01:22:40.518706 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin" (OuterVolumeSpecName: "usr-local-bin") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "usr-local-bin". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:40.518829 master-0 kubenswrapper[19803]: I0313 01:22:40.518791 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir" (OuterVolumeSpecName: "static-pod-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "static-pod-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:40.518945 master-0 kubenswrapper[19803]: I0313 01:22:40.518821 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 01:22:40.519002 master-0 kubenswrapper[19803]: I0313 01:22:40.518926 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir" (OuterVolumeSpecName: "log-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:40.519002 master-0 kubenswrapper[19803]: I0313 01:22:40.518939 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir" (OuterVolumeSpecName: "data-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "data-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:40.519152 master-0 kubenswrapper[19803]: I0313 01:22:40.519109 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") pod \"8e52bef89f4b50e4590a1719bcc5d7e5\" (UID: \"8e52bef89f4b50e4590a1719bcc5d7e5\") " Mar 13 01:22:40.519256 master-0 kubenswrapper[19803]: I0313 01:22:40.519229 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:40.519364 master-0 kubenswrapper[19803]: I0313 01:22:40.519276 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "8e52bef89f4b50e4590a1719bcc5d7e5" (UID: "8e52bef89f4b50e4590a1719bcc5d7e5"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:22:40.520085 master-0 kubenswrapper[19803]: I0313 01:22:40.520046 19803 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:40.520085 master-0 kubenswrapper[19803]: I0313 01:22:40.520080 19803 reconciler_common.go:293] "Volume detached for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-usr-local-bin\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:40.520221 master-0 kubenswrapper[19803]: I0313 01:22:40.520105 19803 reconciler_common.go:293] "Volume detached for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-static-pod-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:40.520221 master-0 kubenswrapper[19803]: I0313 01:22:40.520126 19803 reconciler_common.go:293] "Volume detached for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-log-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:40.520221 master-0 kubenswrapper[19803]: I0313 01:22:40.520146 19803 reconciler_common.go:293] "Volume detached for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-data-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:40.520221 master-0 kubenswrapper[19803]: I0313 01:22:40.520165 19803 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/8e52bef89f4b50e4590a1719bcc5d7e5-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:22:40.834657 master-0 kubenswrapper[19803]: I0313 01:22:40.834479 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-rev/0.log" Mar 13 01:22:40.836811 master-0 kubenswrapper[19803]: I0313 
01:22:40.836759 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd-metrics/0.log" Mar 13 01:22:40.837947 master-0 kubenswrapper[19803]: I0313 01:22:40.837895 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcd/0.log" Mar 13 01:22:40.838812 master-0 kubenswrapper[19803]: I0313 01:22:40.838773 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_8e52bef89f4b50e4590a1719bcc5d7e5/etcdctl/0.log" Mar 13 01:22:40.840603 master-0 kubenswrapper[19803]: I0313 01:22:40.840536 19803 scope.go:117] "RemoveContainer" containerID="d4307a8d99b06baad18f959ac230bad4c2bf7ab603532b53714a7efb8d542993" Mar 13 01:22:40.841026 master-0 kubenswrapper[19803]: I0313 01:22:40.840982 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 01:22:40.866948 master-0 kubenswrapper[19803]: I0313 01:22:40.866904 19803 scope.go:117] "RemoveContainer" containerID="11afe1e82df06ef58f2b34ee7f14cab6582b1c3ebb23e73f966071d3f60bb7d3" Mar 13 01:22:40.903378 master-0 kubenswrapper[19803]: I0313 01:22:40.903326 19803 scope.go:117] "RemoveContainer" containerID="bf41e0708018a7a42a9ea985f7ec3256a3866f84520062060092284abe939c72" Mar 13 01:22:40.928827 master-0 kubenswrapper[19803]: I0313 01:22:40.928778 19803 scope.go:117] "RemoveContainer" containerID="3b4b0099ff3715076e4da8c307cf4cdf19113ad975d741008a026d470fd6e8de" Mar 13 01:22:40.959157 master-0 kubenswrapper[19803]: I0313 01:22:40.959104 19803 scope.go:117] "RemoveContainer" containerID="59b81ddf96703b46c61723679f4eccced325378be4bf3ce47532a5cf8c25aff1" Mar 13 01:22:40.982483 master-0 kubenswrapper[19803]: I0313 01:22:40.982442 19803 scope.go:117] "RemoveContainer" containerID="dc0cc2d6bf9be0a194a0217c205d2ab79cbfb7d5acd7c9e8902600ce17ed4649" Mar 13 01:22:41.005156 master-0 
kubenswrapper[19803]: I0313 01:22:41.005128 19803 scope.go:117] "RemoveContainer" containerID="03b6f556b130d09fe1680dbfd846eba4b3a8ef627f216c08cf30ba1c6140ea1c" Mar 13 01:22:41.030023 master-0 kubenswrapper[19803]: I0313 01:22:41.029992 19803 scope.go:117] "RemoveContainer" containerID="4c5b2d8c08ccdfef9dcab32e4f7ca60deac949b04ad9ebcfbb4f605f23b2baeb" Mar 13 01:22:41.849877 master-0 kubenswrapper[19803]: I0313 01:22:41.849793 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-84f57b9877-ffb2n" event={"ID":"cd044580-0236-4ee8-9a26-b8513e400238","Type":"ContainerStarted","Data":"b0cba45d9b5cf91080b592d1d2bfc99f4f460e78c67d86a88125f72fb1635f44"} Mar 13 01:22:41.850756 master-0 kubenswrapper[19803]: I0313 01:22:41.850265 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-84f57b9877-ffb2n" Mar 13 01:22:41.852951 master-0 kubenswrapper[19803]: I0313 01:22:41.852906 19803 patch_prober.go:28] interesting pod/downloads-84f57b9877-ffb2n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.89:8080/\": dial tcp 10.128.0.89:8080: connect: connection refused" start-of-body= Mar 13 01:22:41.853079 master-0 kubenswrapper[19803]: I0313 01:22:41.852963 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-ffb2n" podUID="cd044580-0236-4ee8-9a26-b8513e400238" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.89:8080/\": dial tcp 10.128.0.89:8080: connect: connection refused" Mar 13 01:22:42.328657 master-0 kubenswrapper[19803]: I0313 01:22:42.328520 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e52bef89f4b50e4590a1719bcc5d7e5" path="/var/lib/kubelet/pods/8e52bef89f4b50e4590a1719bcc5d7e5/volumes" Mar 13 01:22:42.859184 master-0 kubenswrapper[19803]: I0313 01:22:42.859130 19803 patch_prober.go:28] interesting 
pod/downloads-84f57b9877-ffb2n container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.128.0.89:8080/\": dial tcp 10.128.0.89:8080: connect: connection refused" start-of-body= Mar 13 01:22:42.860106 master-0 kubenswrapper[19803]: I0313 01:22:42.859195 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-84f57b9877-ffb2n" podUID="cd044580-0236-4ee8-9a26-b8513e400238" containerName="download-server" probeResult="failure" output="Get \"http://10.128.0.89:8080/\": dial tcp 10.128.0.89:8080: connect: connection refused" Mar 13 01:22:43.241781 master-0 kubenswrapper[19803]: E0313 01:22:43.241598 19803 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189c4201e656e550 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:8e52bef89f4b50e4590a1719bcc5d7e5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Killing,Message:Stopping container etcd-metrics,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:22:09.203987792 +0000 UTC m=+277.169135481,LastTimestamp:2026-03-13 01:22:09.203987792 +0000 UTC m=+277.169135481,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:22:47.209235 master-0 kubenswrapper[19803]: I0313 01:22:47.209154 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-84f57b9877-ffb2n" Mar 13 01:22:49.398798 master-0 kubenswrapper[19803]: E0313 01:22:49.398716 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get 
\"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:22:49.437411 master-0 kubenswrapper[19803]: E0313 01:22:49.437325 19803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:22:51.314453 master-0 kubenswrapper[19803]: I0313 01:22:51.314302 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 01:22:51.351707 master-0 kubenswrapper[19803]: I0313 01:22:51.351630 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada" Mar 13 01:22:51.351707 master-0 kubenswrapper[19803]: I0313 01:22:51.351699 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada" Mar 13 01:22:59.400158 master-0 kubenswrapper[19803]: E0313 01:22:59.400038 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:22:59.400158 master-0 kubenswrapper[19803]: E0313 01:22:59.400112 19803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 01:22:59.438916 master-0 kubenswrapper[19803]: E0313 01:22:59.438775 19803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:22:59.438916 master-0 
kubenswrapper[19803]: I0313 01:22:59.438892 19803 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Mar 13 01:23:04.562782 master-0 kubenswrapper[19803]: I0313 01:23:04.562715 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-mcps9_c687237e-50e5-405d-8fef-0efbc3866630/approver/1.log" Mar 13 01:23:04.563883 master-0 kubenswrapper[19803]: I0313 01:23:04.563795 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-mcps9_c687237e-50e5-405d-8fef-0efbc3866630/approver/0.log" Mar 13 01:23:04.564592 master-0 kubenswrapper[19803]: I0313 01:23:04.564440 19803 generic.go:334] "Generic (PLEG): container finished" podID="c687237e-50e5-405d-8fef-0efbc3866630" containerID="b59c177e34d0deb037bbfb6fe7cd23b008e03a59c7d82a89ffa611ae562dbeb4" exitCode=1 Mar 13 01:23:04.564592 master-0 kubenswrapper[19803]: I0313 01:23:04.564486 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-mcps9" event={"ID":"c687237e-50e5-405d-8fef-0efbc3866630","Type":"ContainerDied","Data":"b59c177e34d0deb037bbfb6fe7cd23b008e03a59c7d82a89ffa611ae562dbeb4"} Mar 13 01:23:04.564792 master-0 kubenswrapper[19803]: I0313 01:23:04.564666 19803 scope.go:117] "RemoveContainer" containerID="826ddf0fad5a47b74a9e97796304f54274bf436e1dab02b9917102d0ced785b8" Mar 13 01:23:04.565491 master-0 kubenswrapper[19803]: I0313 01:23:04.565429 19803 scope.go:117] "RemoveContainer" containerID="b59c177e34d0deb037bbfb6fe7cd23b008e03a59c7d82a89ffa611ae562dbeb4" Mar 13 01:23:05.573743 master-0 kubenswrapper[19803]: I0313 01:23:05.573680 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-network-node-identity_network-node-identity-mcps9_c687237e-50e5-405d-8fef-0efbc3866630/approver/1.log" Mar 13 01:23:05.575358 master-0 kubenswrapper[19803]: 
I0313 01:23:05.575073 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-mcps9" event={"ID":"c687237e-50e5-405d-8fef-0efbc3866630","Type":"ContainerStarted","Data":"857251f2cbb230d687658f97485ad00d28681e50c7991589da4586498417b4af"} Mar 13 01:23:09.439809 master-0 kubenswrapper[19803]: E0313 01:23:09.439681 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Mar 13 01:23:12.322052 master-0 kubenswrapper[19803]: I0313 01:23:12.321765 19803 status_manager.go:851] "Failed to get status for pod" podUID="cd044580-0236-4ee8-9a26-b8513e400238" pod="openshift-console/downloads-84f57b9877-ffb2n" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods downloads-84f57b9877-ffb2n)" Mar 13 01:23:17.245085 master-0 kubenswrapper[19803]: E0313 01:23:17.244894 19803 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189c4201e656e4e2 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:8e52bef89f4b50e4590a1719bcc5d7e5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Killing,Message:Stopping container etcd-readyz,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:22:09.203987682 +0000 UTC m=+277.169135361,LastTimestamp:2026-03-13 01:22:09.203987682 +0000 UTC m=+277.169135361,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:23:19.613086 master-0 kubenswrapper[19803]: E0313 01:23:19.612661 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:23:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:23:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:23:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:23:09Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7365fa46219476560dd59d3a82f041546a33f0935c57eb4f3274ab3118ef0b\\\"],\\\"sizeBytes\\\":2895821940},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3f3a3fc0144fd075212160b467722ab529c42c226d7e87d397f821c8e7df8628\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:ec7e570be8cf0476a38d4db98b0455d5b94538b5b7b2ddb3b7d8f12c724c6ddb\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284752601},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:50fe533376cf6d45ae7e343c58d9c480fb1bc96859ffbbdc51ce2c428de2b653\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd2f5e111c85cdeeff92a61f881c260de30a26d2d9938eef43024e637422abaa\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221745878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813
cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":
[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ca868abfecbf9a9c414a4c79e57c4c55e62c8a6796f899ba59dde86c4cf4bb\\\"],\\\"sizeBytes\\\":512235767},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5
732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88\\\"],\\\"sizeBytes\\\":502712961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:23:19.642546 master-0 kubenswrapper[19803]: E0313 01:23:19.642405 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Mar 13 01:23:25.355400 master-0 kubenswrapper[19803]: E0313 01:23:25.355035 19803 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 01:23:25.356368 master-0 kubenswrapper[19803]: I0313 01:23:25.355740 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-master-0" Mar 13 01:23:25.390098 master-0 kubenswrapper[19803]: W0313 01:23:25.389994 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod29c709c82970b529e7b9b895aa92ef05.slice/crio-cc40190d32c44f4a4d3befdc078b79de072e30c2757f0b38a51a668fa85839a2 WatchSource:0}: Error finding container cc40190d32c44f4a4d3befdc078b79de072e30c2757f0b38a51a668fa85839a2: Status 404 returned error can't find the container with id cc40190d32c44f4a4d3befdc078b79de072e30c2757f0b38a51a668fa85839a2 Mar 13 01:23:25.620373 master-0 kubenswrapper[19803]: E0313 01:23:25.620315 19803 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:23:25.620373 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8" Netns:"/var/run/netns/34ae72a7-1b92-43ef-a173-85f6cea4f90f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod 
alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:23:25.620373 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:23:25.620373 master-0 kubenswrapper[19803]: > Mar 13 01:23:25.620884 master-0 kubenswrapper[19803]: E0313 01:23:25.620407 19803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:23:25.620884 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8" Netns:"/var/run/netns/34ae72a7-1b92-43ef-a173-85f6cea4f90f" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the 
networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:23:25.620884 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:23:25.620884 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:23:25.620884 master-0 kubenswrapper[19803]: E0313 01:23:25.620430 19803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:23:25.620884 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8" Netns:"/var/run/netns/34ae72a7-1b92-43ef-a173-85f6cea4f90f" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:23:25.620884 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:23:25.620884 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:23:25.620884 master-0 kubenswrapper[19803]: E0313 01:23:25.620517 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"alertmanager-main-0_openshift-monitoring(8b300a46-0e04-4109-a370-2589ce3efa0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"alertmanager-main-0_openshift-monitoring(8b300a46-0e04-4109-a370-2589ce3efa0c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8\\\" Netns:\\\"/var/run/netns/34ae72a7-1b92-43ef-a173-85f6cea4f90f\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=7e5907e85ded22da6be086bb53ea7b8e2f2094519ebcf5cc9b1a54df93c9e5a8;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/alertmanager-main-0" 
podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" Mar 13 01:23:25.853980 master-0 kubenswrapper[19803]: I0313 01:23:25.853888 19803 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="0c189520a5af8ea48ccd123f2e1fd049c219997892811c4b500390519a7166b2" exitCode=0 Mar 13 01:23:25.853980 master-0 kubenswrapper[19803]: I0313 01:23:25.853967 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"0c189520a5af8ea48ccd123f2e1fd049c219997892811c4b500390519a7166b2"} Mar 13 01:23:25.854339 master-0 kubenswrapper[19803]: I0313 01:23:25.854003 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"cc40190d32c44f4a4d3befdc078b79de072e30c2757f0b38a51a668fa85839a2"} Mar 13 01:23:25.854339 master-0 kubenswrapper[19803]: I0313 01:23:25.854085 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:23:25.854339 master-0 kubenswrapper[19803]: I0313 01:23:25.854279 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada" Mar 13 01:23:25.854339 master-0 kubenswrapper[19803]: I0313 01:23:25.854297 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada" Mar 13 01:23:25.855802 master-0 kubenswrapper[19803]: I0313 01:23:25.854867 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:23:29.614290 master-0 kubenswrapper[19803]: E0313 01:23:29.614169 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": context deadline exceeded" Mar 13 01:23:30.044018 master-0 kubenswrapper[19803]: E0313 01:23:30.043915 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="800ms" Mar 13 01:23:38.432367 master-0 kubenswrapper[19803]: E0313 01:23:38.432308 19803 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:23:38.432367 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23" Netns:"/var/run/netns/ee75a616-eab2-4ba7-af5f-47784d05c825" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: 
failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:23:38.432367 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:23:38.432367 master-0 kubenswrapper[19803]: > Mar 13 01:23:38.432868 master-0 kubenswrapper[19803]: E0313 01:23:38.432391 19803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:23:38.432868 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23" Netns:"/var/run/netns/ee75a616-eab2-4ba7-af5f-47784d05c825" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: 
[openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:23:38.432868 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:23:38.432868 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:23:38.432868 master-0 kubenswrapper[19803]: E0313 01:23:38.432416 19803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:23:38.432868 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23" Netns:"/var/run/netns/ee75a616-eab2-4ba7-af5f-47784d05c825" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:23:38.432868 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:23:38.432868 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:23:38.432868 master-0 kubenswrapper[19803]: E0313 01:23:38.432478 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"prometheus-k8s-0_openshift-monitoring(80dda8c5-33c6-46ba-b4fa-8e4877de9187)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"prometheus-k8s-0_openshift-monitoring(80dda8c5-33c6-46ba-b4fa-8e4877de9187)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23\\\" Netns:\\\"/var/run/netns/ee75a616-eab2-4ba7-af5f-47784d05c825\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=16546ae0dbf68f6b6749f1515b75a98f66fb49d26eef9d09e33ec2704d43de23;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/prometheus-k8s-0" 
podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" Mar 13 01:23:38.964891 master-0 kubenswrapper[19803]: I0313 01:23:38.964770 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:23:38.965921 master-0 kubenswrapper[19803]: I0313 01:23:38.965881 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:23:39.615631 master-0 kubenswrapper[19803]: E0313 01:23:39.614630 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:23:40.845122 master-0 kubenswrapper[19803]: E0313 01:23:40.844978 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Mar 13 01:23:48.865335 master-0 kubenswrapper[19803]: I0313 01:23:48.865203 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:23:48.866442 master-0 kubenswrapper[19803]: E0313 01:23:48.865469 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:23:48.866442 master-0 kubenswrapper[19803]: I0313 01:23:48.865549 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:23:48.866442 master-0 kubenswrapper[19803]: E0313 01:23:48.865562 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:23:48.866442 master-0 kubenswrapper[19803]: E0313 01:23:48.865731 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:23:48.866442 master-0 kubenswrapper[19803]: E0313 01:23:48.865786 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:23:48.866442 master-0 kubenswrapper[19803]: E0313 01:23:48.865849 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:25:50.865812916 +0000 UTC m=+498.830960635 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:23:48.866442 master-0 kubenswrapper[19803]: E0313 01:23:48.865897 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:25:50.865878128 +0000 UTC m=+498.831025847 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered Mar 13 01:23:49.616328 master-0 kubenswrapper[19803]: E0313 01:23:49.616097 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:23:51.109904 master-0 kubenswrapper[19803]: I0313 01:23:51.109803 19803 generic.go:334] "Generic (PLEG): container finished" podID="8ad2a6d5-6edf-4840-89f9-47847c8dac05" containerID="e02d6b0ebe17533096e975a2adacfc3a6fe4916c67a536db59280d4d4877a458" exitCode=0 Mar 13 01:23:51.109904 master-0 kubenswrapper[19803]: I0313 01:23:51.109881 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" event={"ID":"8ad2a6d5-6edf-4840-89f9-47847c8dac05","Type":"ContainerDied","Data":"e02d6b0ebe17533096e975a2adacfc3a6fe4916c67a536db59280d4d4877a458"} Mar 13 01:23:51.111017 master-0 
kubenswrapper[19803]: I0313 01:23:51.109937 19803 scope.go:117] "RemoveContainer" containerID="94468d369b5f43adf08abc9d6a6230238254bef0eb81d4e6a3d5e925f29bcc13" Mar 13 01:23:51.111017 master-0 kubenswrapper[19803]: I0313 01:23:51.110614 19803 scope.go:117] "RemoveContainer" containerID="e02d6b0ebe17533096e975a2adacfc3a6fe4916c67a536db59280d4d4877a458" Mar 13 01:23:51.248760 master-0 kubenswrapper[19803]: E0313 01:23:51.248561 19803 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{etcd-master-0.189c4201e65794f1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-master-0,UID:8e52bef89f4b50e4590a1719bcc5d7e5,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Killing,Message:Stopping container etcd,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:22:09.204032753 +0000 UTC m=+277.169180432,LastTimestamp:2026-03-13 01:22:09.204032753 +0000 UTC m=+277.169180432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:23:52.125707 master-0 kubenswrapper[19803]: I0313 01:23:52.125601 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" event={"ID":"8ad2a6d5-6edf-4840-89f9-47847c8dac05","Type":"ContainerStarted","Data":"e44107a25da30bbb6da688d5e6475e0b44afaa2dfa67146d683bd6e91bb8b3aa"} Mar 13 01:23:52.126834 master-0 kubenswrapper[19803]: I0313 01:23:52.126053 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:23:52.129085 master-0 kubenswrapper[19803]: I0313 01:23:52.129016 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-marketplace/marketplace-operator-64bf9778cb-bx29h" Mar 13 01:23:52.446507 master-0 kubenswrapper[19803]: E0313 01:23:52.446342 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Mar 13 01:23:59.617098 master-0 kubenswrapper[19803]: E0313 01:23:59.617021 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:23:59.618660 master-0 kubenswrapper[19803]: E0313 01:23:59.617739 19803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Mar 13 01:23:59.856623 master-0 kubenswrapper[19803]: E0313 01:23:59.856498 19803 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 01:24:00.198059 master-0 kubenswrapper[19803]: I0313 01:24:00.196092 19803 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="29572c5e6e56295b40629191dedd0373d254bde804addb046737d389a80baa01" exitCode=0 Mar 13 01:24:00.198059 master-0 kubenswrapper[19803]: I0313 01:24:00.196158 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"29572c5e6e56295b40629191dedd0373d254bde804addb046737d389a80baa01"} Mar 13 01:24:00.198059 master-0 kubenswrapper[19803]: I0313 01:24:00.196594 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" 
podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada" Mar 13 01:24:00.198059 master-0 kubenswrapper[19803]: I0313 01:24:00.196617 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada" Mar 13 01:24:01.209015 master-0 kubenswrapper[19803]: I0313 01:24:01.208933 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/cluster-cloud-controller-manager/0.log" Mar 13 01:24:01.209015 master-0 kubenswrapper[19803]: I0313 01:24:01.209017 19803 generic.go:334] "Generic (PLEG): container finished" podID="80eb89dc-ccfc-4360-811a-82a3ef6f7b65" containerID="1424e782d7d010eb17f5faeba062e24f9a0ac4b5291d10741b6ebae4bf0fcb9b" exitCode=1 Mar 13 01:24:01.211368 master-0 kubenswrapper[19803]: I0313 01:24:01.209063 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" event={"ID":"80eb89dc-ccfc-4360-811a-82a3ef6f7b65","Type":"ContainerDied","Data":"1424e782d7d010eb17f5faeba062e24f9a0ac4b5291d10741b6ebae4bf0fcb9b"} Mar 13 01:24:01.211368 master-0 kubenswrapper[19803]: I0313 01:24:01.209751 19803 scope.go:117] "RemoveContainer" containerID="1424e782d7d010eb17f5faeba062e24f9a0ac4b5291d10741b6ebae4bf0fcb9b" Mar 13 01:24:02.223089 master-0 kubenswrapper[19803]: I0313 01:24:02.223038 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/cluster-cloud-controller-manager/0.log" Mar 13 01:24:02.224164 master-0 kubenswrapper[19803]: I0313 01:24:02.224118 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" 
event={"ID":"80eb89dc-ccfc-4360-811a-82a3ef6f7b65","Type":"ContainerStarted","Data":"a577511f8b3ef673c5fbe124068cf0fb8244ae561be3e1fb98acb2835524ff81"} Mar 13 01:24:05.647139 master-0 kubenswrapper[19803]: E0313 01:24:05.647053 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="6.4s" Mar 13 01:24:07.269139 master-0 kubenswrapper[19803]: I0313 01:24:07.268991 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/config-sync-controllers/0.log" Mar 13 01:24:07.271085 master-0 kubenswrapper[19803]: I0313 01:24:07.271014 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/cluster-cloud-controller-manager/0.log" Mar 13 01:24:07.271185 master-0 kubenswrapper[19803]: I0313 01:24:07.271122 19803 generic.go:334] "Generic (PLEG): container finished" podID="80eb89dc-ccfc-4360-811a-82a3ef6f7b65" containerID="32e6aea9a2d0b5bfb5397a8b0d83b4b7864301a451107157a16f24b685af041a" exitCode=1 Mar 13 01:24:07.271245 master-0 kubenswrapper[19803]: I0313 01:24:07.271183 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" event={"ID":"80eb89dc-ccfc-4360-811a-82a3ef6f7b65","Type":"ContainerDied","Data":"32e6aea9a2d0b5bfb5397a8b0d83b4b7864301a451107157a16f24b685af041a"} Mar 13 01:24:07.272133 master-0 kubenswrapper[19803]: I0313 01:24:07.272085 19803 scope.go:117] "RemoveContainer" 
containerID="32e6aea9a2d0b5bfb5397a8b0d83b4b7864301a451107157a16f24b685af041a" Mar 13 01:24:08.292197 master-0 kubenswrapper[19803]: I0313 01:24:08.292047 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/config-sync-controllers/0.log" Mar 13 01:24:08.293680 master-0 kubenswrapper[19803]: I0313 01:24:08.293590 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/cluster-cloud-controller-manager/0.log" Mar 13 01:24:08.293824 master-0 kubenswrapper[19803]: I0313 01:24:08.293751 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cloud-controller-manager-operator/cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8" event={"ID":"80eb89dc-ccfc-4360-811a-82a3ef6f7b65","Type":"ContainerStarted","Data":"10c3650b5882eaa7713ad1a24beaa1783a9e07434766f8c98f451e658d0718dd"} Mar 13 01:24:11.320962 master-0 kubenswrapper[19803]: I0313 01:24:11.320769 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/1.log" Mar 13 01:24:11.321909 master-0 kubenswrapper[19803]: I0313 01:24:11.321735 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/0.log" Mar 13 01:24:11.321909 master-0 kubenswrapper[19803]: I0313 01:24:11.321834 19803 generic.go:334] "Generic (PLEG): container finished" podID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" containerID="28e210a816437ccb443c8d6a143794ae992a561c368c609a20f38e48757f3d85" exitCode=1 Mar 13 01:24:11.321909 master-0 kubenswrapper[19803]: I0313 
01:24:11.321892 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerDied","Data":"28e210a816437ccb443c8d6a143794ae992a561c368c609a20f38e48757f3d85"} Mar 13 01:24:11.322106 master-0 kubenswrapper[19803]: I0313 01:24:11.322050 19803 scope.go:117] "RemoveContainer" containerID="743c555e1cf0c98c73695ed678affcb2226d9582a12dd77e2de535512f78c66d" Mar 13 01:24:11.322607 master-0 kubenswrapper[19803]: I0313 01:24:11.322489 19803 scope.go:117] "RemoveContainer" containerID="28e210a816437ccb443c8d6a143794ae992a561c368c609a20f38e48757f3d85" Mar 13 01:24:12.336005 master-0 kubenswrapper[19803]: I0313 01:24:12.335913 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/1.log" Mar 13 01:24:12.337065 master-0 kubenswrapper[19803]: I0313 01:24:12.336050 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerStarted","Data":"4b5aa7e3efa2c3abf973ba4aa1c880d4533ee27b6bf1af0cbd00347580cc6d9b"} Mar 13 01:24:12.339946 master-0 kubenswrapper[19803]: I0313 01:24:12.339840 19803 status_manager.go:851] "Failed to get status for pod" podUID="24e04786030519cf5fd9f600ea6710e9" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods kube-controller-manager-master-0)" Mar 13 01:24:14.354810 master-0 kubenswrapper[19803]: I0313 01:24:14.354735 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-z4qvz_81835d51-a414-440f-889b-690561e98d6a/manager/1.log" Mar 
13 01:24:14.356109 master-0 kubenswrapper[19803]: I0313 01:24:14.356051 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-z4qvz_81835d51-a414-440f-889b-690561e98d6a/manager/0.log" Mar 13 01:24:14.356898 master-0 kubenswrapper[19803]: I0313 01:24:14.356851 19803 generic.go:334] "Generic (PLEG): container finished" podID="81835d51-a414-440f-889b-690561e98d6a" containerID="5a44ac8efb09ea69fddd87bdea34d5c96b816c25b6e79670f14b1432f959ff9a" exitCode=1 Mar 13 01:24:14.357022 master-0 kubenswrapper[19803]: I0313 01:24:14.356939 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" event={"ID":"81835d51-a414-440f-889b-690561e98d6a","Type":"ContainerDied","Data":"5a44ac8efb09ea69fddd87bdea34d5c96b816c25b6e79670f14b1432f959ff9a"} Mar 13 01:24:14.357166 master-0 kubenswrapper[19803]: I0313 01:24:14.357149 19803 scope.go:117] "RemoveContainer" containerID="e9eb86bc8639ac87892dc75bde4aa22bd6e683c301d4d69ac50acf0d02a2db39" Mar 13 01:24:14.357861 master-0 kubenswrapper[19803]: I0313 01:24:14.357825 19803 scope.go:117] "RemoveContainer" containerID="5a44ac8efb09ea69fddd87bdea34d5c96b816c25b6e79670f14b1432f959ff9a" Mar 13 01:24:15.370332 master-0 kubenswrapper[19803]: I0313 01:24:15.370256 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-z4qvz_81835d51-a414-440f-889b-690561e98d6a/manager/1.log" Mar 13 01:24:15.371643 master-0 kubenswrapper[19803]: I0313 01:24:15.371578 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" event={"ID":"81835d51-a414-440f-889b-690561e98d6a","Type":"ContainerStarted","Data":"2f4adb323ac7738fc856922f5c3a45084a462e73aac1393a1ab7e3a01c7b2f5c"} Mar 13 01:24:15.372168 master-0 kubenswrapper[19803]: I0313 01:24:15.372114 19803 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:24:19.869549 master-0 kubenswrapper[19803]: E0313 01:24:19.869247 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:24:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:24:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:24:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:24:09Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7365fa46219476560dd59d3a82f041546a33f0935c57eb4f3274ab3118ef0b\\\"],\\\"sizeBytes\\\":2895821940},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3f3a3fc0144fd075212160b467722ab529c42c226d7e87d397f821c8e7df8628\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:ec7e570be8cf0476a38d4db98b0455d5b94538b5b7b2ddb3b7d8f12c724c6ddb\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284752601},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:50fe533376cf6d45ae7e343c58d9c480fb1bc96859ffbbdc51ce2c428de2b653\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd2f5e111c85cdeeff92a61f881c260de30a26d2d9938eef43024e637422abaa\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221745878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e50
9e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ca868abfecbf9a9c414a4c79e57c4c55e62c8a6796f899ba59dde86c4cf4bb\\\"],\\\"sizeBytes\\\":512235767},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847
c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88\\\"],\\\"sizeBytes\\\":502712961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:24:21.372689 master-0 kubenswrapper[19803]: I0313 01:24:21.372554 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-catalogd/catalogd-controller-manager-7f8b8b6f4c-z4qvz" Mar 13 01:24:22.048022 master-0 kubenswrapper[19803]: E0313 01:24:22.047869 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s" Mar 13 01:24:22.447869 master-0 kubenswrapper[19803]: I0313 01:24:22.447601 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-n4252_07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/manager/1.log" Mar 13 01:24:22.449608 master-0 kubenswrapper[19803]: I0313 01:24:22.449487 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-n4252_07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/manager/0.log" Mar 13 01:24:22.449891 master-0 kubenswrapper[19803]: I0313 01:24:22.449676 19803 generic.go:334] "Generic (PLEG): container finished" podID="07894508-4e56-48d4-ab3c-4ab8f4ea2e7e" containerID="b54252f16f5fb3f714b95f360cc3679cec5204f01eb6fa38a3bb6001419c1a68" exitCode=1 Mar 13 01:24:22.449891 master-0 kubenswrapper[19803]: I0313 01:24:22.449798 19803 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" event={"ID":"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e","Type":"ContainerDied","Data":"b54252f16f5fb3f714b95f360cc3679cec5204f01eb6fa38a3bb6001419c1a68"} Mar 13 01:24:22.450069 master-0 kubenswrapper[19803]: I0313 01:24:22.449912 19803 scope.go:117] "RemoveContainer" containerID="fd379745af9da3dead649206438373348f4ca6dba57dff1deac4d0df35fc6fc1" Mar 13 01:24:22.451242 master-0 kubenswrapper[19803]: I0313 01:24:22.451166 19803 scope.go:117] "RemoveContainer" containerID="b54252f16f5fb3f714b95f360cc3679cec5204f01eb6fa38a3bb6001419c1a68" Mar 13 01:24:23.463159 master-0 kubenswrapper[19803]: I0313 01:24:23.463081 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-controller_operator-controller-controller-manager-6598bfb6c4-n4252_07894508-4e56-48d4-ab3c-4ab8f4ea2e7e/manager/1.log" Mar 13 01:24:23.463995 master-0 kubenswrapper[19803]: I0313 01:24:23.463872 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" event={"ID":"07894508-4e56-48d4-ab3c-4ab8f4ea2e7e","Type":"ContainerStarted","Data":"66a4037cd6afb7be7b3a4e4eaf68f448942214f307ba4e1835017c6b0beb50f3"} Mar 13 01:24:23.464271 master-0 kubenswrapper[19803]: I0313 01:24:23.464216 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:24:25.252484 master-0 kubenswrapper[19803]: E0313 01:24:25.252261 19803 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cni-sysctl-allowlist-ds-thhrl.189c41ffcd1b2b8d openshift-multus 12717 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:cni-sysctl-allowlist-ds-thhrl,UID:4626655d-add4-4cbd-9ba7-7082f63db442,APIVersion:v1,ResourceVersion:12650,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:22:00 +0000 UTC,LastTimestamp:2026-03-13 01:22:10.187201488 +0000 UTC m=+278.152349167,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:24:26.622348 master-0 kubenswrapper[19803]: E0313 01:24:26.622228 19803 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:24:26.622348 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193" Netns:"/var/run/netns/56a8d6e8-b5c1-4ec6-8c97-78ea053e6e6c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: 
SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:24:26.622348 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:24:26.622348 master-0 kubenswrapper[19803]: > Mar 13 01:24:26.623594 master-0 kubenswrapper[19803]: E0313 01:24:26.622384 19803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:24:26.623594 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193" Netns:"/var/run/netns/56a8d6e8-b5c1-4ec6-8c97-78ea053e6e6c" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod 
[openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:24:26.623594 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:24:26.623594 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:24:26.623594 master-0 kubenswrapper[19803]: E0313 01:24:26.622423 19803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:24:26.623594 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193" Netns:"/var/run/netns/56a8d6e8-b5c1-4ec6-8c97-78ea053e6e6c" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:24:26.623594 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:24:26.623594 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:24:26.623594 master-0 kubenswrapper[19803]: E0313 01:24:26.622687 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"alertmanager-main-0_openshift-monitoring(8b300a46-0e04-4109-a370-2589ce3efa0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"alertmanager-main-0_openshift-monitoring(8b300a46-0e04-4109-a370-2589ce3efa0c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193\\\" Netns:\\\"/var/run/netns/56a8d6e8-b5c1-4ec6-8c97-78ea053e6e6c\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=accdba56e7db4a17713894ea04e66a09978912757448f9c1372734d5ac98d193;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/alertmanager-main-0" 
podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" Mar 13 01:24:27.501683 master-0 kubenswrapper[19803]: I0313 01:24:27.501547 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:24:27.502958 master-0 kubenswrapper[19803]: I0313 01:24:27.502899 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:24:29.870696 master-0 kubenswrapper[19803]: E0313 01:24:29.870614 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:24:30.541820 master-0 kubenswrapper[19803]: I0313 01:24:30.541691 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/0.log" Mar 13 01:24:30.542313 master-0 kubenswrapper[19803]: I0313 01:24:30.541861 19803 generic.go:334] "Generic (PLEG): container finished" podID="21110b48-25fc-434a-b156-7f6bd6064bed" containerID="0cfdb95efdc8432bdd4633516711c41c3cb5e31aacb0fb3f7ab64226c6ff685f" exitCode=1 Mar 13 01:24:30.542313 master-0 kubenswrapper[19803]: I0313 01:24:30.542003 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" event={"ID":"21110b48-25fc-434a-b156-7f6bd6064bed","Type":"ContainerDied","Data":"0cfdb95efdc8432bdd4633516711c41c3cb5e31aacb0fb3f7ab64226c6ff685f"} Mar 13 01:24:30.542786 master-0 kubenswrapper[19803]: I0313 01:24:30.542734 19803 scope.go:117] "RemoveContainer" containerID="0cfdb95efdc8432bdd4633516711c41c3cb5e31aacb0fb3f7ab64226c6ff685f" Mar 13 01:24:30.544957 master-0 kubenswrapper[19803]: I0313 01:24:30.544894 19803 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6_56e20b21-ba17-46ae-a740-0e7bd45eae5f/control-plane-machine-set-operator/0.log" Mar 13 01:24:30.545054 master-0 kubenswrapper[19803]: I0313 01:24:30.544982 19803 generic.go:334] "Generic (PLEG): container finished" podID="56e20b21-ba17-46ae-a740-0e7bd45eae5f" containerID="08915b60146d98d7efb6d41a6c922970c9b802ffad2670270c869858e2667b72" exitCode=1 Mar 13 01:24:30.545054 master-0 kubenswrapper[19803]: I0313 01:24:30.545035 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" event={"ID":"56e20b21-ba17-46ae-a740-0e7bd45eae5f","Type":"ContainerDied","Data":"08915b60146d98d7efb6d41a6c922970c9b802ffad2670270c869858e2667b72"} Mar 13 01:24:30.545853 master-0 kubenswrapper[19803]: I0313 01:24:30.545785 19803 scope.go:117] "RemoveContainer" containerID="08915b60146d98d7efb6d41a6c922970c9b802ffad2670270c869858e2667b72" Mar 13 01:24:31.559485 master-0 kubenswrapper[19803]: I0313 01:24:31.559362 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6_56e20b21-ba17-46ae-a740-0e7bd45eae5f/control-plane-machine-set-operator/0.log" Mar 13 01:24:31.560555 master-0 kubenswrapper[19803]: I0313 01:24:31.559599 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-6686554ddc-pmrq6" event={"ID":"56e20b21-ba17-46ae-a740-0e7bd45eae5f","Type":"ContainerStarted","Data":"652ce6531422d3429777a0b899c69f03e01f47f9624743d2f0a9d8ad7b68e45b"} Mar 13 01:24:31.564216 master-0 kubenswrapper[19803]: I0313 01:24:31.564161 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/0.log" Mar 13 01:24:31.564358 master-0 kubenswrapper[19803]: 
I0313 01:24:31.564237 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" event={"ID":"21110b48-25fc-434a-b156-7f6bd6064bed","Type":"ContainerStarted","Data":"a06159b44add11f1f640a62febc09ee506ed8ed487eaccded51fcfacb9d58f8c"} Mar 13 01:24:34.199461 master-0 kubenswrapper[19803]: E0313 01:24:34.199381 19803 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 01:24:34.601229 master-0 kubenswrapper[19803]: I0313 01:24:34.601094 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"eb9dd669711fe74caa34b712c9073951a0afc392d51790a81b31f49fbd6b516e"} Mar 13 01:24:34.601940 master-0 kubenswrapper[19803]: I0313 01:24:34.601867 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada" Mar 13 01:24:34.601940 master-0 kubenswrapper[19803]: I0313 01:24:34.601916 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada" Mar 13 01:24:35.616809 master-0 kubenswrapper[19803]: I0313 01:24:35.616697 19803 generic.go:334] "Generic (PLEG): container finished" podID="29c709c82970b529e7b9b895aa92ef05" containerID="eb9dd669711fe74caa34b712c9073951a0afc392d51790a81b31f49fbd6b516e" exitCode=0 Mar 13 01:24:35.616809 master-0 kubenswrapper[19803]: I0313 01:24:35.616773 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerDied","Data":"eb9dd669711fe74caa34b712c9073951a0afc392d51790a81b31f49fbd6b516e"} Mar 13 01:24:36.957013 master-0 kubenswrapper[19803]: I0313 01:24:36.956909 19803 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-operator-controller/operator-controller-controller-manager-6598bfb6c4-n4252" Mar 13 01:24:39.049732 master-0 kubenswrapper[19803]: E0313 01:24:39.049469 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 01:24:39.853273 master-0 kubenswrapper[19803]: E0313 01:24:39.853210 19803 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:24:39.853273 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768" Netns:"/var/run/netns/99c0a0ff-5fad-4349-ac9e-12ba81ebd919" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get 
"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:24:39.853273 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:24:39.853273 master-0 kubenswrapper[19803]: > Mar 13 01:24:39.853454 master-0 kubenswrapper[19803]: E0313 01:24:39.853301 19803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:24:39.853454 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768" Netns:"/var/run/netns/99c0a0ff-5fad-4349-ac9e-12ba81ebd919" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of 
cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:24:39.853454 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:24:39.853454 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:24:39.853454 master-0 kubenswrapper[19803]: E0313 01:24:39.853324 19803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:24:39.853454 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768" Netns:"/var/run/netns/99c0a0ff-5fad-4349-ac9e-12ba81ebd919" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: 
[openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:24:39.853454 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:24:39.853454 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:24:39.853454 master-0 kubenswrapper[19803]: E0313 01:24:39.853387 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"prometheus-k8s-0_openshift-monitoring(80dda8c5-33c6-46ba-b4fa-8e4877de9187)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"prometheus-k8s-0_openshift-monitoring(80dda8c5-33c6-46ba-b4fa-8e4877de9187)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768\\\" 
Netns:\\\"/var/run/netns/99c0a0ff-5fad-4349-ac9e-12ba81ebd919\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=28b4723ff8604930a57440dab7889cd322f367718469de9df0f7a4e7944e0768;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" Mar 13 01:24:39.872000 master-0 kubenswrapper[19803]: E0313 01:24:39.871908 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:24:40.192878 master-0 kubenswrapper[19803]: I0313 01:24:40.192644 19803 patch_prober.go:28] 
interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Readiness probe status=failure output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused" start-of-body=
Mar 13 01:24:40.192878 master-0 kubenswrapper[19803]: I0313 01:24:40.192743 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": dial tcp [::1]:10357: connect: connection refused"
Mar 13 01:24:40.671320 master-0 kubenswrapper[19803]: I0313 01:24:40.671223 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log"
Mar 13 01:24:40.671320 master-0 kubenswrapper[19803]: I0313 01:24:40.671319 19803 generic.go:334] "Generic (PLEG): container finished" podID="24e04786030519cf5fd9f600ea6710e9" containerID="b1c12809753fc2546fb8e821c8e7f6bbad80bd3bc2111cc6731d186681cf0988" exitCode=0
Mar 13 01:24:40.672144 master-0 kubenswrapper[19803]: I0313 01:24:40.671417 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerDied","Data":"b1c12809753fc2546fb8e821c8e7f6bbad80bd3bc2111cc6731d186681cf0988"}
Mar 13 01:24:40.672640 master-0 kubenswrapper[19803]: I0313 01:24:40.672585 19803 scope.go:117] "RemoveContainer" containerID="b1c12809753fc2546fb8e821c8e7f6bbad80bd3bc2111cc6731d186681cf0988"
Mar 13 01:24:40.677145 master-0 kubenswrapper[19803]: I0313 01:24:40.677096 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log"
Mar 13 01:24:40.678214 master-0 kubenswrapper[19803]: I0313 01:24:40.677867 19803 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="36b85103aab608e07fe57ad44e030eaf64a6694fa43ef8b29c17a2a587b80411" exitCode=1
Mar 13 01:24:40.678214 master-0 kubenswrapper[19803]: I0313 01:24:40.677964 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"36b85103aab608e07fe57ad44e030eaf64a6694fa43ef8b29c17a2a587b80411"}
Mar 13 01:24:40.678214 master-0 kubenswrapper[19803]: I0313 01:24:40.678038 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:24:40.679099 master-0 kubenswrapper[19803]: I0313 01:24:40.679004 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:24:40.679376 master-0 kubenswrapper[19803]: I0313 01:24:40.679311 19803 scope.go:117] "RemoveContainer" containerID="36b85103aab608e07fe57ad44e030eaf64a6694fa43ef8b29c17a2a587b80411"
Mar 13 01:24:41.479419 master-0 kubenswrapper[19803]: I0313 01:24:41.479338 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:24:41.694440 master-0 kubenswrapper[19803]: I0313 01:24:41.694353 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log"
Mar 13 01:24:41.695225 master-0 kubenswrapper[19803]: I0313 01:24:41.695152 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"8758f285d02298f3f87cf8a95d69a9b9fc7adb315bfb680293d79f27940394d1"}
Mar 13 01:24:41.696945 master-0 kubenswrapper[19803]: I0313 01:24:41.696895 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 01:24:41.702330 master-0 kubenswrapper[19803]: I0313 01:24:41.702234 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/2.log"
Mar 13 01:24:41.703065 master-0 kubenswrapper[19803]: I0313 01:24:41.703024 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/1.log"
Mar 13 01:24:41.703127 master-0 kubenswrapper[19803]: I0313 01:24:41.703094 19803 generic.go:334] "Generic (PLEG): container finished" podID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" containerID="4b5aa7e3efa2c3abf973ba4aa1c880d4533ee27b6bf1af0cbd00347580cc6d9b" exitCode=1
Mar 13 01:24:41.703239 master-0 kubenswrapper[19803]: I0313 01:24:41.703193 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerDied","Data":"4b5aa7e3efa2c3abf973ba4aa1c880d4533ee27b6bf1af0cbd00347580cc6d9b"}
Mar 13 01:24:41.703311 master-0 kubenswrapper[19803]: I0313 01:24:41.703269 19803 scope.go:117] "RemoveContainer" containerID="28e210a816437ccb443c8d6a143794ae992a561c368c609a20f38e48757f3d85"
Mar 13 01:24:41.704238 master-0 kubenswrapper[19803]: I0313 01:24:41.704197 19803 scope.go:117] "RemoveContainer" containerID="4b5aa7e3efa2c3abf973ba4aa1c880d4533ee27b6bf1af0cbd00347580cc6d9b"
Mar 13 01:24:41.704651 master-0 kubenswrapper[19803]: E0313 01:24:41.704608 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-bj5ld_openshift-cluster-storage-operator(0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" podUID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a"
Mar 13 01:24:41.712076 master-0 kubenswrapper[19803]: I0313 01:24:41.712031 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log"
Mar 13 01:24:41.712148 master-0 kubenswrapper[19803]: I0313 01:24:41.712112 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"1ddb833fd5fea377be553b539334e827a1b9d7511648f7ab96694e420bb21512"}
Mar 13 01:24:42.726130 master-0 kubenswrapper[19803]: I0313 01:24:42.726020 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/2.log"
Mar 13 01:24:43.740028 master-0 kubenswrapper[19803]: I0313 01:24:43.739938 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-cp77c_7e938267-de1f-46f7-bf78-b0b3e810c4fa/machine-approver-controller/0.log"
Mar 13 01:24:43.741253 master-0 kubenswrapper[19803]: I0313 01:24:43.741165 19803 generic.go:334] "Generic (PLEG): container finished" podID="7e938267-de1f-46f7-bf78-b0b3e810c4fa" containerID="bae7a737a9916bf6e75a9e64bc9870fd746bebbcde61cffd2159ed594dff080d" exitCode=255
Mar 13 01:24:43.741374 master-0 kubenswrapper[19803]: I0313 01:24:43.741257 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" event={"ID":"7e938267-de1f-46f7-bf78-b0b3e810c4fa","Type":"ContainerDied","Data":"bae7a737a9916bf6e75a9e64bc9870fd746bebbcde61cffd2159ed594dff080d"}
Mar 13 01:24:43.742581 master-0 kubenswrapper[19803]: I0313 01:24:43.742491 19803 scope.go:117] "RemoveContainer" containerID="bae7a737a9916bf6e75a9e64bc9870fd746bebbcde61cffd2159ed594dff080d"
Mar 13 01:24:44.751227 master-0 kubenswrapper[19803]: I0313 01:24:44.751168 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-machine-approver_machine-approver-754bdc9f9d-cp77c_7e938267-de1f-46f7-bf78-b0b3e810c4fa/machine-approver-controller/0.log"
Mar 13 01:24:44.751811 master-0 kubenswrapper[19803]: I0313 01:24:44.751700 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod"
pod="openshift-cluster-machine-approver/machine-approver-754bdc9f9d-cp77c" event={"ID":"7e938267-de1f-46f7-bf78-b0b3e810c4fa","Type":"ContainerStarted","Data":"19482d33fd3950948e543fb1e196c08f1bfc2357677662e62fc6e252dc8e2e49"}
Mar 13 01:24:45.765956 master-0 kubenswrapper[19803]: I0313 01:24:45.765862 19803 generic.go:334] "Generic (PLEG): container finished" podID="8c377a67-e763-4925-afae-a7f8546a369b" containerID="3823a1546dde2a6cc4ddf8e1b66df5b62407e5907786e28efbf8762481ad427e" exitCode=0
Mar 13 01:24:45.767148 master-0 kubenswrapper[19803]: I0313 01:24:45.765939 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" event={"ID":"8c377a67-e763-4925-afae-a7f8546a369b","Type":"ContainerDied","Data":"3823a1546dde2a6cc4ddf8e1b66df5b62407e5907786e28efbf8762481ad427e"}
Mar 13 01:24:45.767148 master-0 kubenswrapper[19803]: I0313 01:24:45.766083 19803 scope.go:117] "RemoveContainer" containerID="7e4809732e6f42f6e1aaeab2220c5d3d3098fc28ea26ac8cc73446ea1b10cd93"
Mar 13 01:24:45.767148 master-0 kubenswrapper[19803]: I0313 01:24:45.767076 19803 scope.go:117] "RemoveContainer" containerID="3823a1546dde2a6cc4ddf8e1b66df5b62407e5907786e28efbf8762481ad427e"
Mar 13 01:24:46.776943 master-0 kubenswrapper[19803]: I0313 01:24:46.776847 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-66b55d57d-d6gzp" event={"ID":"8c377a67-e763-4925-afae-a7f8546a369b","Type":"ContainerStarted","Data":"9c224e98a08a7fa4151f14e3d995cf41ce17db8af6d3226fe814d8bb10981cd3"}
Mar 13 01:24:47.790683 master-0 kubenswrapper[19803]: I0313 01:24:47.790619 19803 generic.go:334] "Generic (PLEG): container finished" podID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" containerID="2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0" exitCode=0
Mar 13 01:24:47.791659 master-0 kubenswrapper[19803]: I0313 01:24:47.790734 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" event={"ID":"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7","Type":"ContainerDied","Data":"2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0"}
Mar 13 01:24:47.793310 master-0 kubenswrapper[19803]: I0313 01:24:47.793284 19803 scope.go:117] "RemoveContainer" containerID="2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0"
Mar 13 01:24:48.806232 master-0 kubenswrapper[19803]: I0313 01:24:48.806119 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" event={"ID":"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7","Type":"ContainerStarted","Data":"797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681"}
Mar 13 01:24:48.807319 master-0 kubenswrapper[19803]: I0313 01:24:48.806629 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:24:48.813625 master-0 kubenswrapper[19803]: I0313 01:24:48.813488 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:24:49.873362 master-0 kubenswrapper[19803]: E0313 01:24:49.873223 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 01:24:50.191390 master-0 kubenswrapper[19803]: I0313 01:24:50.191053 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:24:51.480376 master-0 kubenswrapper[19803]: I0313 01:24:51.480275 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:24:52.314990 master-0 kubenswrapper[19803]: I0313 01:24:52.314901 19803 scope.go:117] "RemoveContainer" containerID="4b5aa7e3efa2c3abf973ba4aa1c880d4533ee27b6bf1af0cbd00347580cc6d9b"
Mar 13 01:24:52.855895 master-0 kubenswrapper[19803]: I0313 01:24:52.855830 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/2.log"
Mar 13 01:24:52.857156 master-0 kubenswrapper[19803]: I0313 01:24:52.857096 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerStarted","Data":"220c8a7eaa1c73476fd723825df13cbd502b3de33685d69020081dc52ff7896b"}
Mar 13 01:24:54.481391 master-0 kubenswrapper[19803]: I0313 01:24:54.481220 19803 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 01:24:54.482383 master-0 kubenswrapper[19803]: I0313 01:24:54.481402 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 01:24:56.050874 master-0 kubenswrapper[19803]: E0313 01:24:56.050730 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get
\"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Mar 13 01:24:59.258207 master-0 kubenswrapper[19803]: E0313 01:24:59.257907 19803 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{cni-sysctl-allowlist-ds-thhrl.189c41ffcd1b2b8d openshift-multus 12717 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-multus,Name:cni-sysctl-allowlist-ds-thhrl,UID:4626655d-add4-4cbd-9ba7-7082f63db442,APIVersion:v1,ResourceVersion:12650,FieldPath:spec.containers{kube-multus-additional-cni-plugins},},Reason:Unhealthy,Message:Readiness probe errored: rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:22:00 +0000 UTC,LastTimestamp:2026-03-13 01:22:20.185087354 +0000 UTC m=+288.150235033,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 01:24:59.873688 master-0 kubenswrapper[19803]: E0313 01:24:59.873607 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 01:24:59.874055 master-0 kubenswrapper[19803]: E0313 01:24:59.874010 19803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 13 01:25:04.480317 master-0 kubenswrapper[19803]: I0313 01:25:04.480190 19803 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 01:25:04.480317 master-0 kubenswrapper[19803]: I0313 01:25:04.480306 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 01:25:08.609390 master-0 kubenswrapper[19803]: E0313 01:25:08.609311 19803 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 13 01:25:09.025343 master-0 kubenswrapper[19803]: I0313 01:25:09.025300 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada"
Mar 13 01:25:09.025703 master-0 kubenswrapper[19803]: I0313 01:25:09.025691 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada"
Mar 13 01:25:11.587255 master-0 kubenswrapper[19803]: I0313 01:25:11.587150 19803 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:53686->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Mar 13 01:25:11.588621 master-0 kubenswrapper[19803]: I0313 01:25:11.587260 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:53686->127.0.0.1:10357: read: connection reset by peer"
Mar 13 01:25:11.588621 master-0 kubenswrapper[19803]: I0313 01:25:11.587354 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:25:11.589342 master-0 kubenswrapper[19803]: I0313 01:25:11.588633 19803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"1ddb833fd5fea377be553b539334e827a1b9d7511648f7ab96694e420bb21512"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 13 01:25:11.589342 master-0 kubenswrapper[19803]: I0313 01:25:11.588802 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" containerID="cri-o://1ddb833fd5fea377be553b539334e827a1b9d7511648f7ab96694e420bb21512" gracePeriod=30
Mar 13 01:25:12.059720 master-0 kubenswrapper[19803]: I0313 01:25:12.059623 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/1.log"
Mar 13 01:25:12.063552 master-0 kubenswrapper[19803]: I0313 01:25:12.063350 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log"
Mar 13 01:25:12.063552 master-0 kubenswrapper[19803]: I0313 01:25:12.063450 19803 generic.go:334] "Generic (PLEG): container finished" podID="24e04786030519cf5fd9f600ea6710e9" containerID="1ddb833fd5fea377be553b539334e827a1b9d7511648f7ab96694e420bb21512" exitCode=255
Mar 13 01:25:12.063710 master-0 kubenswrapper[19803]: I0313 01:25:12.063574 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerDied","Data":"1ddb833fd5fea377be553b539334e827a1b9d7511648f7ab96694e420bb21512"}
Mar 13 01:25:12.063710 master-0 kubenswrapper[19803]: I0313 01:25:12.063645 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"70a2f08e906dc96f6b52818590dddea1c35c051fc9dc0614447379a9118e65f9"}
Mar 13 01:25:12.063710 master-0 kubenswrapper[19803]: I0313 01:25:12.063679 19803 scope.go:117] "RemoveContainer" containerID="b1c12809753fc2546fb8e821c8e7f6bbad80bd3bc2111cc6731d186681cf0988"
Mar 13 01:25:12.342200 master-0 kubenswrapper[19803]: I0313 01:25:12.342082 19803 status_manager.go:851] "Failed to get status for pod" podUID="a6d93d3d-2899-4962-a25a-712e2fb9584b" pod="openshift-kube-scheduler/installer-5-master-0" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods installer-5-master-0)"
Mar 13 01:25:13.051936 master-0 kubenswrapper[19803]: E0313 01:25:13.051716 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": context deadline exceeded" interval="7s"
Mar 13 01:25:13.079030 master-0 kubenswrapper[19803]: I0313 01:25:13.078935 19803 log.go:25] "Finished parsing log file"
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/1.log" Mar 13 01:25:13.081490 master-0 kubenswrapper[19803]: I0313 01:25:13.081442 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log" Mar 13 01:25:19.997732 master-0 kubenswrapper[19803]: E0313 01:25:19.996990 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:25:09Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:25:09Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:25:09Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:25:09Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7365fa46219476560dd59d3a82f041546a33f0935c57eb4f3274ab3118ef0b\\\"],\\\"sizeBytes\\\":2895821940},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3f3a3fc0144fd07
5212160b467722ab529c42c226d7e87d397f821c8e7df8628\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:ec7e570be8cf0476a38d4db98b0455d5b94538b5b7b2ddb3b7d8f12c724c6ddb\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284752601},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:50fe533376cf6d45ae7e343c58d9c480fb1bc96859ffbbdc51ce2c428de2b653\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd2f5e111c85cdeeff92a61f881c260de30a26d2d9938eef43024e637422abaa\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221745878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25
b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ca868abfecbf9a9c414a4c79e57c4c55e62c8a6796f899ba59dde86c4cf4bb\\\"],\\\"sizeBytes\\\":512235767},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef699
4fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88\\\"],\\\"sizeBytes\\\":502712961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\\"],\\\"sizeBytes\\\":484450382}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:25:20.190839 master-0 kubenswrapper[19803]: I0313 01:25:20.190589 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:25:21.479371 master-0 kubenswrapper[19803]: I0313 01:25:21.479249 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:25:23.188773 master-0 kubenswrapper[19803]: I0313 01:25:23.188713 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/3.log" Mar 13 01:25:23.190763 master-0 kubenswrapper[19803]: I0313 01:25:23.190680 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/2.log" Mar 13 01:25:23.190913 master-0 kubenswrapper[19803]: I0313 01:25:23.190784 19803 generic.go:334] "Generic (PLEG): container finished" 
podID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" containerID="220c8a7eaa1c73476fd723825df13cbd502b3de33685d69020081dc52ff7896b" exitCode=1 Mar 13 01:25:23.190913 master-0 kubenswrapper[19803]: I0313 01:25:23.190845 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerDied","Data":"220c8a7eaa1c73476fd723825df13cbd502b3de33685d69020081dc52ff7896b"} Mar 13 01:25:23.191144 master-0 kubenswrapper[19803]: I0313 01:25:23.190911 19803 scope.go:117] "RemoveContainer" containerID="4b5aa7e3efa2c3abf973ba4aa1c880d4533ee27b6bf1af0cbd00347580cc6d9b" Mar 13 01:25:23.191998 master-0 kubenswrapper[19803]: I0313 01:25:23.191945 19803 scope.go:117] "RemoveContainer" containerID="220c8a7eaa1c73476fd723825df13cbd502b3de33685d69020081dc52ff7896b" Mar 13 01:25:23.192371 master-0 kubenswrapper[19803]: E0313 01:25:23.192313 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-bj5ld_openshift-cluster-storage-operator(0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" podUID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" Mar 13 01:25:24.202072 master-0 kubenswrapper[19803]: I0313 01:25:24.202021 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/3.log" Mar 13 01:25:24.479469 master-0 kubenswrapper[19803]: I0313 01:25:24.479361 19803 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get 
\"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 01:25:24.479810 master-0 kubenswrapper[19803]: I0313 01:25:24.479460 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 01:25:28.371853 master-0 kubenswrapper[19803]: E0313 01:25:28.371758 19803 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:25:28.371853 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294" Netns:"/var/run/netns/2c5ca7b6-6423-4dc6-8764-74565bbdd711" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update 
the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:25:28.371853 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:25:28.371853 master-0 kubenswrapper[19803]: > Mar 13 01:25:28.373054 master-0 kubenswrapper[19803]: E0313 01:25:28.371896 19803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:25:28.373054 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294" Netns:"/var/run/netns/2c5ca7b6-6423-4dc6-8764-74565bbdd711" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting 
the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:25:28.373054 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:25:28.373054 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:25:28.373054 master-0 kubenswrapper[19803]: E0313 01:25:28.371985 19803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:25:28.373054 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294" Netns:"/var/run/netns/2c5ca7b6-6423-4dc6-8764-74565bbdd711" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:25:28.373054 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:25:28.373054 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:25:28.373054 master-0 kubenswrapper[19803]: E0313 01:25:28.372148 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"alertmanager-main-0_openshift-monitoring(8b300a46-0e04-4109-a370-2589ce3efa0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"alertmanager-main-0_openshift-monitoring(8b300a46-0e04-4109-a370-2589ce3efa0c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294\\\" Netns:\\\"/var/run/netns/2c5ca7b6-6423-4dc6-8764-74565bbdd711\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=af8efc1cde595e47369fa8d4d91b72703886d16fff629c53ec485caa8d17e294;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/alertmanager-main-0" 
podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" Mar 13 01:25:29.247495 master-0 kubenswrapper[19803]: I0313 01:25:29.247400 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:25:29.249408 master-0 kubenswrapper[19803]: I0313 01:25:29.248657 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:25:29.998999 master-0 kubenswrapper[19803]: E0313 01:25:29.998857 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:25:30.054034 master-0 kubenswrapper[19803]: E0313 01:25:30.053879 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 01:25:30.162871 master-0 kubenswrapper[19803]: I0313 01:25:30.162722 19803 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 01:25:30.163148 master-0 kubenswrapper[19803]: I0313 01:25:30.162878 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection 
(Client.Timeout exceeded while awaiting headers)" Mar 13 01:25:31.268207 master-0 kubenswrapper[19803]: I0313 01:25:31.268098 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/1.log" Mar 13 01:25:31.270160 master-0 kubenswrapper[19803]: I0313 01:25:31.270092 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/0.log" Mar 13 01:25:31.270274 master-0 kubenswrapper[19803]: I0313 01:25:31.270191 19803 generic.go:334] "Generic (PLEG): container finished" podID="21110b48-25fc-434a-b156-7f6bd6064bed" containerID="a06159b44add11f1f640a62febc09ee506ed8ed487eaccded51fcfacb9d58f8c" exitCode=1 Mar 13 01:25:31.270274 master-0 kubenswrapper[19803]: I0313 01:25:31.270249 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" event={"ID":"21110b48-25fc-434a-b156-7f6bd6064bed","Type":"ContainerDied","Data":"a06159b44add11f1f640a62febc09ee506ed8ed487eaccded51fcfacb9d58f8c"} Mar 13 01:25:31.270435 master-0 kubenswrapper[19803]: I0313 01:25:31.270308 19803 scope.go:117] "RemoveContainer" containerID="0cfdb95efdc8432bdd4633516711c41c3cb5e31aacb0fb3f7ab64226c6ff685f" Mar 13 01:25:31.271232 master-0 kubenswrapper[19803]: I0313 01:25:31.271171 19803 scope.go:117] "RemoveContainer" containerID="a06159b44add11f1f640a62febc09ee506ed8ed487eaccded51fcfacb9d58f8c" Mar 13 01:25:31.272539 master-0 kubenswrapper[19803]: E0313 01:25:31.271811 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=cluster-baremetal-operator 
pod=cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api(21110b48-25fc-434a-b156-7f6bd6064bed)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" podUID="21110b48-25fc-434a-b156-7f6bd6064bed" Mar 13 01:25:32.282875 master-0 kubenswrapper[19803]: I0313 01:25:32.282799 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/1.log" Mar 13 01:25:33.262794 master-0 kubenswrapper[19803]: E0313 01:25:33.262499 19803 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event=< Mar 13 01:25:33.262794 master-0 kubenswrapper[19803]: &Event{ObjectMeta:{kube-controller-manager-master-0.189c420538e024bf openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:24e04786030519cf5fd9f600ea6710e9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:ProbeError,Message:Liveness probe error: Get "https://192.168.32.10:10257/healthz": dial tcp 192.168.32.10:10257: connect: connection refused Mar 13 01:25:33.262794 master-0 kubenswrapper[19803]: body: Mar 13 01:25:33.262794 master-0 kubenswrapper[19803]: ,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:22:23.473616063 +0000 UTC m=+291.438763742,LastTimestamp:2026-03-13 01:22:23.473616063 +0000 UTC m=+291.438763742,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,} Mar 13 01:25:33.262794 master-0 kubenswrapper[19803]: > Mar 13 01:25:34.101838 master-0 kubenswrapper[19803]: I0313 01:25:34.101702 19803 patch_prober.go:28] interesting 
pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Liveness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 01:25:34.102880 master-0 kubenswrapper[19803]: I0313 01:25:34.101862 19803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 01:25:34.480212 master-0 kubenswrapper[19803]: I0313 01:25:34.480128 19803 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 01:25:34.480729 master-0 kubenswrapper[19803]: I0313 01:25:34.480665 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 01:25:38.314844 master-0 kubenswrapper[19803]: I0313 01:25:38.314741 19803 scope.go:117] "RemoveContainer" containerID="220c8a7eaa1c73476fd723825df13cbd502b3de33685d69020081dc52ff7896b" Mar 13 01:25:38.315542 master-0 kubenswrapper[19803]: E0313 01:25:38.315249 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-bj5ld_openshift-cluster-storage-operator(0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" podUID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" Mar 13 01:25:39.999999 master-0 kubenswrapper[19803]: E0313 01:25:39.999853 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:25:40.162487 master-0 kubenswrapper[19803]: I0313 01:25:40.162364 19803 patch_prober.go:28] interesting pod/openshift-kube-scheduler-master-0 container/kube-scheduler namespace/openshift-kube-scheduler: Readiness probe status=failure output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Mar 13 01:25:40.162487 master-0 kubenswrapper[19803]: I0313 01:25:40.162500 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" probeResult="failure" output="Get \"https://192.168.32.10:10259/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Mar 13 01:25:41.515328 master-0 kubenswrapper[19803]: E0313 01:25:41.515240 19803 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:25:41.515328 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox 
k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da" Netns:"/var/run/netns/fba60410-295c-4dbd-94d9-19cc8b3d617a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:25:41.515328 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:25:41.515328 master-0 kubenswrapper[19803]: > Mar 13 01:25:41.516296 master-0 kubenswrapper[19803]: E0313 01:25:41.515360 19803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err=< Mar 13 01:25:41.516296 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da" Netns:"/var/run/netns/fba60410-295c-4dbd-94d9-19cc8b3d617a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:25:41.516296 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:25:41.516296 master-0 kubenswrapper[19803]: > 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:25:41.516296 master-0 kubenswrapper[19803]: E0313 01:25:41.515399 19803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:25:41.516296 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da" Netns:"/var/run/netns/fba60410-295c-4dbd-94d9-19cc8b3d617a" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:25:41.516296 master-0 kubenswrapper[19803]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"}
Mar 13 01:25:41.516296 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:25:41.516296 master-0 kubenswrapper[19803]: E0313 01:25:41.515534 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"prometheus-k8s-0_openshift-monitoring(80dda8c5-33c6-46ba-b4fa-8e4877de9187)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"prometheus-k8s-0_openshift-monitoring(80dda8c5-33c6-46ba-b4fa-8e4877de9187)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da\\\" Netns:\\\"/var/run/netns/fba60410-295c-4dbd-94d9-19cc8b3d617a\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=241adbbadf97ca69374b0fa759c50b7c924f213977d2ff242e40086fd9eff3da;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/prometheus-k8s-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187"
Mar 13 01:25:42.377150 master-0 kubenswrapper[19803]: I0313 01:25:42.377037 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:25:42.378239 master-0 kubenswrapper[19803]: I0313 01:25:42.378194 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:25:42.548379 master-0 kubenswrapper[19803]: I0313 01:25:42.548282 19803 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:33854->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Mar 13 01:25:42.550418 master-0 kubenswrapper[19803]: I0313 01:25:42.548376 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:33854->127.0.0.1:10357: read: connection reset by peer"
Mar 13 01:25:42.550418 master-0 kubenswrapper[19803]: I0313 01:25:42.548446 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:25:42.550418 master-0 kubenswrapper[19803]: I0313 01:25:42.549401 19803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"70a2f08e906dc96f6b52818590dddea1c35c051fc9dc0614447379a9118e65f9"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 13 01:25:42.550418 master-0 kubenswrapper[19803]: I0313 01:25:42.549486 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" containerID="cri-o://70a2f08e906dc96f6b52818590dddea1c35c051fc9dc0614447379a9118e65f9" gracePeriod=30
Mar 13 01:25:43.028688 master-0 kubenswrapper[19803]: E0313 01:25:43.028413 19803 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 13 01:25:43.315282 master-0 kubenswrapper[19803]: I0313 01:25:43.315175 19803 scope.go:117] "RemoveContainer" containerID="a06159b44add11f1f640a62febc09ee506ed8ed487eaccded51fcfacb9d58f8c"
Mar 13 01:25:43.392126 master-0 kubenswrapper[19803]: I0313 01:25:43.392046 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/2.log"
Mar 13 01:25:43.392963 master-0 kubenswrapper[19803]: I0313 01:25:43.392903 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/1.log"
Mar 13 01:25:43.400685 master-0 kubenswrapper[19803]: I0313 01:25:43.400591 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log"
Mar 13 01:25:43.400799 master-0 kubenswrapper[19803]: I0313 01:25:43.400679 19803 generic.go:334] "Generic (PLEG): container finished" podID="24e04786030519cf5fd9f600ea6710e9" containerID="70a2f08e906dc96f6b52818590dddea1c35c051fc9dc0614447379a9118e65f9" exitCode=255
Mar 13 01:25:43.400799 master-0 kubenswrapper[19803]: I0313 01:25:43.400737 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerDied","Data":"70a2f08e906dc96f6b52818590dddea1c35c051fc9dc0614447379a9118e65f9"}
Mar 13 01:25:43.400799 master-0 kubenswrapper[19803]: I0313 01:25:43.400791 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53"}
Mar 13 01:25:43.400929 master-0 kubenswrapper[19803]: I0313 01:25:43.400827 19803 scope.go:117] "RemoveContainer" containerID="1ddb833fd5fea377be553b539334e827a1b9d7511648f7ab96694e420bb21512"
Mar 13 01:25:44.413984 master-0 kubenswrapper[19803]: I0313 01:25:44.413930 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/2.log"
Mar 13 01:25:44.427117 master-0 kubenswrapper[19803]: I0313 01:25:44.417637 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log"
Mar 13 01:25:44.427117 master-0 kubenswrapper[19803]: I0313 01:25:44.424578 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"575cb640fa895d41b25b2f7a6e6e1e3b1f1eaf3bf3f36eb38f770010a5185753"}
Mar 13 01:25:44.427117 master-0 kubenswrapper[19803]: I0313 01:25:44.424664 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"82e3a3940fc1b3df65fbf243864f723da2fc0b90ba26df9668e241ea773c4905"}
Mar 13 01:25:44.427117 master-0 kubenswrapper[19803]: I0313 01:25:44.424715 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d8eda57588a46251b1a955c11374e73c42e099ccfc66f6e8ef95fa8e145d1042"}
Mar 13 01:25:44.428211 master-0 kubenswrapper[19803]: I0313 01:25:44.428140 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/1.log"
Mar 13 01:25:44.428747 master-0 kubenswrapper[19803]: I0313 01:25:44.428691 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" event={"ID":"21110b48-25fc-434a-b156-7f6bd6064bed","Type":"ContainerStarted","Data":"0135ba4974603368e77b84b78bf31b903fe3d5cdbea51d504385ae47de44d443"}
Mar 13 01:25:45.446817 master-0 kubenswrapper[19803]: I0313 01:25:45.446649 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"7f9e148f220a66f914a3b2c333ff29602d2e0222e87325da45120f53fa9494fa"}
Mar 13 01:25:45.446817 master-0 kubenswrapper[19803]: I0313 01:25:45.446744 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-master-0" event={"ID":"29c709c82970b529e7b9b895aa92ef05","Type":"ContainerStarted","Data":"d9f12a138d0c60c2c16cf083b1e81af0e686ea1d4ec72515ced4b2f5e1254f41"}
Mar 13 01:25:45.448328 master-0 kubenswrapper[19803]: I0313 01:25:45.447220 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada"
Mar 13 01:25:45.448328 master-0 kubenswrapper[19803]: I0313 01:25:45.447264 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada"
Mar 13 01:25:47.055651 master-0 kubenswrapper[19803]: E0313 01:25:47.055323 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 13 01:25:48.331407 master-0 kubenswrapper[19803]: I0313 01:25:48.331306 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 01:25:50.001191 master-0 kubenswrapper[19803]: E0313 01:25:50.001074 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 01:25:50.190699 master-0 kubenswrapper[19803]: I0313 01:25:50.190608 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:25:50.357079 master-0 kubenswrapper[19803]: I0313 01:25:50.356875 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-master-0"
Mar 13 01:25:50.876423 master-0 kubenswrapper[19803]: I0313 01:25:50.876341 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0"
Mar 13 01:25:50.876712 master-0 kubenswrapper[19803]: E0313 01:25:50.876648 19803 projected.go:288] Couldn't get configMap openshift-kube-controller-manager/kube-root-ca.crt: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:25:50.876712 master-0 kubenswrapper[19803]: E0313 01:25:50.876704 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-controller-manager/installer-2-master-0: object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:25:50.876860 master-0 kubenswrapper[19803]: E0313 01:25:50.876783 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access podName:7106c6fe-7c8d-45b9-bc5c-521db743663f nodeName:}" failed. No retries permitted until 2026-03-13 01:27:52.876755128 +0000 UTC m=+620.841902827 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access") pod "installer-2-master-0" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f") : object "openshift-kube-controller-manager"/"kube-root-ca.crt" not registered
Mar 13 01:25:50.876860 master-0 kubenswrapper[19803]: I0313 01:25:50.876651 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0"
Mar 13 01:25:50.876990 master-0 kubenswrapper[19803]: E0313 01:25:50.876838 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:25:50.876990 master-0 kubenswrapper[19803]: E0313 01:25:50.876886 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:25:50.876990 master-0 kubenswrapper[19803]: E0313 01:25:50.876965 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:27:52.876944953 +0000 UTC m=+620.842092652 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered
Mar 13 01:25:51.480559 master-0 kubenswrapper[19803]: I0313 01:25:51.480433 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:25:52.315404 master-0 kubenswrapper[19803]: I0313 01:25:52.315312 19803 scope.go:117] "RemoveContainer" containerID="220c8a7eaa1c73476fd723825df13cbd502b3de33685d69020081dc52ff7896b"
Mar 13 01:25:53.530822 master-0 kubenswrapper[19803]: I0313 01:25:53.530726 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/3.log"
Mar 13 01:25:53.532037 master-0 kubenswrapper[19803]: I0313 01:25:53.530872 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerStarted","Data":"cf9aa79d7c848ce2a6eacd046ecf770aa18df700f052345baaa9f27169958c79"}
Mar 13 01:25:54.481244 master-0 kubenswrapper[19803]: I0313 01:25:54.481094 19803 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 01:25:54.481710 master-0 kubenswrapper[19803]: I0313 01:25:54.481246 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 01:25:55.356926 master-0 kubenswrapper[19803]: I0313 01:25:55.356833 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-master-0"
Mar 13 01:25:55.387740 master-0 kubenswrapper[19803]: I0313 01:25:55.387648 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-master-0"
Mar 13 01:26:00.001919 master-0 kubenswrapper[19803]: E0313 01:26:00.001808 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Mar 13 01:26:00.001919 master-0 kubenswrapper[19803]: E0313 01:26:00.001879 19803 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count"
Mar 13 01:26:00.379803 master-0 kubenswrapper[19803]: I0313 01:26:00.376738 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-master-0"
Mar 13 01:26:04.057950 master-0 kubenswrapper[19803]: E0313 01:26:04.057844 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s"
Mar 13 01:26:04.480620 master-0 kubenswrapper[19803]: I0313 01:26:04.480486 19803 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body=
Mar 13 01:26:04.480921 master-0 kubenswrapper[19803]: I0313 01:26:04.480652 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
Mar 13 01:26:07.266455 master-0 kubenswrapper[19803]: E0313 01:26:07.266266 19803 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c420538e13982 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:24e04786030519cf5fd9f600ea6710e9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Unhealthy,Message:Liveness probe failed: Get \"https://192.168.32.10:10257/healthz\": dial tcp 192.168.32.10:10257: connect: connection refused,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:22:23.473686914 +0000 UTC m=+291.438834593,LastTimestamp:2026-03-13 01:22:23.473686914 +0000 UTC m=+291.438834593,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 01:26:12.343618 master-0 kubenswrapper[19803]: I0313 01:26:12.343438 19803 status_manager.go:851] "Failed to get status for pod" podUID="4626655d-add4-4cbd-9ba7-7082f63db442" pod="openshift-multus/cni-sysctl-allowlist-ds-thhrl" err="the server was unable to return a response in the time allotted, but may still be processing the request (get pods cni-sysctl-allowlist-ds-thhrl)"
Mar 13 01:26:13.228683 master-0 kubenswrapper[19803]: I0313 01:26:13.228567 19803 patch_prober.go:28] interesting pod/kube-controller-manager-master-0 container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:59190->127.0.0.1:10357: read: connection reset by peer" start-of-body=
Mar 13 01:26:13.229067 master-0 kubenswrapper[19803]: I0313 01:26:13.228694 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": read tcp 127.0.0.1:59190->127.0.0.1:10357: read: connection reset by peer"
Mar 13 01:26:13.229067 master-0 kubenswrapper[19803]: I0313 01:26:13.228898 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:26:13.231075 master-0 kubenswrapper[19803]: I0313 01:26:13.230272 19803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="cluster-policy-controller" containerStatusID={"Type":"cri-o","ID":"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53"} pod="openshift-kube-controller-manager/kube-controller-manager-master-0" containerMessage="Container cluster-policy-controller failed startup probe, will be restarted"
Mar 13 01:26:13.231075 master-0 kubenswrapper[19803]: I0313 01:26:13.230471 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" containerID="cri-o://a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" gracePeriod=30
Mar 13 01:26:13.258820 master-0 kubenswrapper[19803]: E0313 01:26:13.258750 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(24e04786030519cf5fd9f600ea6710e9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9"
Mar 13 01:26:13.725930 master-0 kubenswrapper[19803]: I0313 01:26:13.725732 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/3.log"
Mar 13 01:26:13.727275 master-0 kubenswrapper[19803]: I0313 01:26:13.727202 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/2.log"
Mar 13 01:26:13.729478 master-0 kubenswrapper[19803]: I0313 01:26:13.729422 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log"
Mar 13 01:26:13.729641 master-0 kubenswrapper[19803]: I0313 01:26:13.729554 19803 generic.go:334] "Generic (PLEG): container finished" podID="24e04786030519cf5fd9f600ea6710e9" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" exitCode=255
Mar 13 01:26:13.729641 master-0 kubenswrapper[19803]: I0313 01:26:13.729602 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerDied","Data":"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53"}
Mar 13 01:26:13.729750 master-0 kubenswrapper[19803]: I0313 01:26:13.729652 19803 scope.go:117] "RemoveContainer" containerID="70a2f08e906dc96f6b52818590dddea1c35c051fc9dc0614447379a9118e65f9"
Mar 13 01:26:13.730766 master-0 kubenswrapper[19803]: I0313 01:26:13.730726 19803 scope.go:117] "RemoveContainer" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53"
Mar 13 01:26:13.731238 master-0 kubenswrapper[19803]: E0313 01:26:13.731174 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(24e04786030519cf5fd9f600ea6710e9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9"
Mar 13 01:26:14.742955 master-0 kubenswrapper[19803]: I0313 01:26:14.742840 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/3.log"
Mar 13 01:26:14.746831 master-0 kubenswrapper[19803]: I0313 01:26:14.746763 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log"
Mar 13 01:26:19.184258 master-0 kubenswrapper[19803]: I0313 01:26:19.184174 19803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0"
Mar 13 01:26:19.185444 master-0 kubenswrapper[19803]: I0313 01:26:19.184907 19803 scope.go:117] "RemoveContainer" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53"
Mar 13 01:26:19.185444 master-0 kubenswrapper[19803]: E0313 01:26:19.185194 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(24e04786030519cf5fd9f600ea6710e9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9"
Mar 13 01:26:19.451712 master-0 kubenswrapper[19803]: E0313 01:26:19.451494 19803 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0"
Mar 13 01:26:19.797485 master-0 kubenswrapper[19803]: I0313 01:26:19.797412 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada"
Mar 13 01:26:19.797948 master-0 kubenswrapper[19803]: I0313 01:26:19.797915 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada"
Mar 13 01:26:20.069477 master-0 kubenswrapper[19803]: E0313 01:26:20.069089 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:26:10Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:26:10Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:26:10Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-03-13T01:26:10Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3e7365fa46219476560dd59d3a82f041546a33f0935c57eb4f3274ab3118ef0b\\\"],\\\"sizeBytes\\\":2895821940},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:1295a1f0e74ae87f51a733e28b64c6fdb6b9a5b069a6897b3870fe52cc1c3b0b\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:505eeaa3f051e9f4ea6a622aca92e5c4eae07078ca185d9fecfe8cc9b6dfc899\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1739173859},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:82f121f9d021a9843b9458f9f222c40f292f2c21dcfcf00f05daacaca8a949c0\\\"],\\\"sizeBytes\\\":1637445817},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:3f3a3fc0144fd075212160b467722ab529c42c226d7e87d397f821c8e7df8628\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:ec7e570be8cf0476a38d4db98b0455d5b94538b5b7b2ddb3b7d8f12c724c6ddb\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1284752601},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:381e96959e3c3b08a3e2715e6024697ae14af31bd0378b49f583e984b3b9a192\\\"],\\\"sizeBytes\\\":1238047254},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:898c67bf7fc973e99114f3148976a6c21ae0dbe413051415588fa9b995f5b331\\\",\\
\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:a641939d2096609a4cf6eec872a1476b7c671bfd81cffc2edeb6e9f13c9deeba\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1231028434},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:50fe533376cf6d45ae7e343c58d9c480fb1bc96859ffbbdc51ce2c428de2b653\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:dd2f5e111c85cdeeff92a61f881c260de30a26d2d9938eef43024e637422abaa\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1221745878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c9330c756dd6ab107e9a4b671bc52742c90d5be11a8380d8b710e2bd4e0ed43c\\\"],\\\"sizeBytes\\\":992610645},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\\\"],\\\"sizeBytes\\\":943837171},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff40e33e63d6c1f4e4393d5506e38def25ba20582d980fec8b81f81c867ceeec\\\"],\\\"sizeBytes\\\":918278686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:042e6a37747405da54cf91543d44408c9531327a2cce653c41ca851aa7c896d8\\\"],\\\"sizeBytes\\\":880378279},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e207c762b7802ee0e54507d21ed1f25b19eddc511a4b824934c16c163193be6a\\\"],\\\"sizeBytes\\\":876146500},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:41dbd66e9a886c1fd7a99752f358c6125a209e83c0dd37b35730baae58d82ee8\\\"],\\\"sizeBytes\\\":862633255},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2508a5f66e509e813cb09825b5456be91b4cdd4d02f470f22a33de42c753f2b7\\\"],\\\"sizeBytes\\\":862197440},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bfcd8017eede3fb66fa3f5b47c27508b787d38455689154461f0e6a5dc303ff\\\"],\\\"sizeBytes\\\":772939850},{\\\"names\\\":[\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9c946fdc5a4cd16ff998c17844780e7efc38f7f38b97a8a40d75cd77b318ddef\\\"],\\\"sizeBytes\\\":687947017},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0c03cb25dc6f6a865529ebc979e8d7d08492b28fd3fb93beddf30e1cb06f1245\\\"],\\\"sizeBytes\\\":683169303},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3f34dc492c80a3dee4643cc2291044750ac51e6e919b973de8723fa8b70bde70\\\"],\\\"sizeBytes\\\":677929075},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a149ed17b20a7577fceacfc5198f8b7b3edf314ee22f77bd6ab87f06a3aa17f3\\\"],\\\"sizeBytes\\\":621647686},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1ec9d3dbcc6f9817c0f6d09f64c0d98c91b03afbb1fcb3c1e1718aca900754b\\\"],\\\"sizeBytes\\\":589379637},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1575be013a898f153cbf012aeaf28ce720022f934dc05bdffbe479e30999d460\\\"],\\\"sizeBytes\\\":582153879},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:eb82e437a701ce83b70e56be8477d987da67578714dda3d9fa6628804b1b56f5\\\"],\\\"sizeBytes\\\":558210153},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d470dba32064cc62b2ab29303d6e00612304548262eaa2f4e5b40a00a26f71ce\\\"],\\\"sizeBytes\\\":557426734},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:28f33d62fd0b94c5ea0ebcd7a4216848c8dd671a38d901ce98f4c399b700e1c7\\\"],\\\"sizeBytes\\\":548751793},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc20748723f55f960cfb6328d1591880bbd1b3452155633996d4f41fc7c5f46b\\\"],\\\"sizeBytes\\\":529324693},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ac6f0695d3386e6d601f4ae507940981352fa3ad884b0fed6fb25698c5e6f916\\\"],\\\"sizeBytes\\\":528946249},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6088910bdc1583b275fab261e3234c0b63b4cc16
d01bcea697b6a7f6db13bdf3\\\"],\\\"sizeBytes\\\":518384455},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-release@sha256:14bd3c04daa885009785d48f4973e2890751a7ec116cc14d17627245cda54d7b\\\"],\\\"sizeBytes\\\":517997625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5500329ab50804678fb8a90b96bf2a469bca16b620fb6dd2f5f5a17106e94898\\\"],\\\"sizeBytes\\\":514980169},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd0b71d620cf0acbfcd1b58797dc30050bd167cb6b7a7f62c8333dd370c76d5\\\"],\\\"sizeBytes\\\":513581866},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9bd818e37e1f9dbe5393c557b89e81010d68171408e0e4157a3d92ae0ca1c953\\\"],\\\"sizeBytes\\\":513220825},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d601c8437b4d8bbe2da0f3b08f1bd8693f5a4ef6d835377ec029c79d9dca5dab\\\"],\\\"sizeBytes\\\":512273539},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ca868abfecbf9a9c414a4c79e57c4c55e62c8a6796f899ba59dde86c4cf4bb\\\"],\\\"sizeBytes\\\":512235767},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b47d2b146e833bc1612a652136f43afcf1ba30f32cbd0a2f06ca9fc80d969f0\\\"],\\\"sizeBytes\\\":511226810},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:834063dd26fb3d2489e193489198a0d5fbe9c775a0e30173e5fcef6994fbf0f6\\\"],\\\"sizeBytes\\\":511164376},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee46e13e26156c904e5784e2d64511021ed0974a169ccd6476b05bff1c44ec56\\\"],\\\"sizeBytes\\\":508888174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7220d16ea511c0f0410cf45db45aaafcc64847c9cb5732ad1eff39ceb482cdba\\\"],\\\"sizeBytes\\\":508544235},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:526c5c02a8fa86a2fa83a7087d4a5c4b1c4072c0f3906163494cc3b3c1295e9b\\\"],\\\"sizeBytes\\\":507967997},{\\\"names\\\":[\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:4010a8f9d932615336227e2fd43325d4fa9025dca4bebe032106efea733fcfc3\\\"],\\\"sizeBytes\\\":506479655},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76b719f5bd541eb1a8bae124d650896b533e7bc3107be536e598b3ab4e135282\\\"],\\\"sizeBytes\\\":506394574},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5de69354d08184ecd6144facc1461777674674e8304971216d4cf1a5025472b9\\\"],\\\"sizeBytes\\\":505344964},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a324f47cf789c0480fa4bcb0812152abc3cd844318bab193108fe4349eed609\\\"],\\\"sizeBytes\\\":505242594},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b8cb5e0caeca0fb02f3e8c72b7ddf1c49e3c602e42e119ba30c60525f1db1821\\\"],\\\"sizeBytes\\\":504658657},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d11f13e867f4df046ca6789bb7273da5d0c08895b3dea00949c8a5458f9e22f9\\\"],\\\"sizeBytes\\\":504623546},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2c8f904c1084450856b501d40bbc9246265fe34a2b70efec23541e3285da7f88\\\"],\\\"sizeBytes\\\":502712961},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:76bdc35338c4d0f5e5b9448fb73e3578656f908a962286692e12a0372ec721d5\\\"],\\\"sizeBytes\\\":495994161},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ff2db11ce277288befab25ddb86177e832842d2edb5607a2da8f252a030e1cfc\\\"],\\\"sizeBytes\\\":495064829},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9b2e765b795c30c910c331c85226e5db0d56463b6c81d79ded739cba76e2b032\\\"],\\\"sizeBytes\\\":487151732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2fe5144b1f72bdcf5d5a52130f02ed86fbec3875cc4ac108ead00eaac1659e06\\\"],\\\"sizeBytes\\\":487090672},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4a4c3e6ca0cd26f7eb5270cfafbcf423cf2986d152bf5b9fc6469d40599e104e\\
\"],\\\"sizeBytes\\\":484450382}]}}\" for node \"master-0\": Patch \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0/status?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:26:21.059054 master-0 kubenswrapper[19803]: E0313 01:26:21.058887 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 01:26:22.841214 master-0 kubenswrapper[19803]: I0313 01:26:22.841145 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/4.log" Mar 13 01:26:22.842089 master-0 kubenswrapper[19803]: I0313 01:26:22.842016 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/3.log" Mar 13 01:26:22.842089 master-0 kubenswrapper[19803]: I0313 01:26:22.842071 19803 generic.go:334] "Generic (PLEG): container finished" podID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" containerID="cf9aa79d7c848ce2a6eacd046ecf770aa18df700f052345baaa9f27169958c79" exitCode=1 Mar 13 01:26:22.842242 master-0 kubenswrapper[19803]: I0313 01:26:22.842121 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerDied","Data":"cf9aa79d7c848ce2a6eacd046ecf770aa18df700f052345baaa9f27169958c79"} Mar 13 01:26:22.842242 master-0 kubenswrapper[19803]: I0313 01:26:22.842173 19803 scope.go:117] "RemoveContainer" containerID="220c8a7eaa1c73476fd723825df13cbd502b3de33685d69020081dc52ff7896b" Mar 13 
01:26:22.843159 master-0 kubenswrapper[19803]: I0313 01:26:22.843114 19803 scope.go:117] "RemoveContainer" containerID="cf9aa79d7c848ce2a6eacd046ecf770aa18df700f052345baaa9f27169958c79" Mar 13 01:26:22.843745 master-0 kubenswrapper[19803]: E0313 01:26:22.843685 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-bj5ld_openshift-cluster-storage-operator(0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" podUID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" Mar 13 01:26:23.854261 master-0 kubenswrapper[19803]: I0313 01:26:23.854184 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/4.log" Mar 13 01:26:29.917821 master-0 kubenswrapper[19803]: E0313 01:26:29.917731 19803 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:26:29.917821 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed" Netns:"/var/run/netns/79ef201f-2724-4e7c-9a65-dcc6cc39b562" IfName:"eth0" 
Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:26:29.917821 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:26:29.917821 master-0 kubenswrapper[19803]: > Mar 13 01:26:29.918946 master-0 kubenswrapper[19803]: E0313 01:26:29.917842 19803 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err=< Mar 13 01:26:29.918946 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 
'ContainerID:"c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed" Netns:"/var/run/netns/79ef201f-2724-4e7c-9a65-dcc6cc39b562" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:26:29.918946 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:26:29.918946 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:26:29.918946 master-0 kubenswrapper[19803]: E0313 01:26:29.917900 19803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:26:29.918946 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed): error 
adding pod openshift-monitoring_alertmanager-main-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed" Netns:"/var/run/netns/79ef201f-2724-4e7c-9a65-dcc6cc39b562" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c" Path:"" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get "https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Mar 13 01:26:29.918946 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:26:29.918946 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:26:29.918946 master-0 kubenswrapper[19803]: E0313 01:26:29.918125 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"alertmanager-main-0_openshift-monitoring(8b300a46-0e04-4109-a370-2589ce3efa0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"alertmanager-main-0_openshift-monitoring(8b300a46-0e04-4109-a370-2589ce3efa0c)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_alertmanager-main-0_openshift-monitoring_8b300a46-0e04-4109-a370-2589ce3efa0c_0(c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed): error adding pod openshift-monitoring_alertmanager-main-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed\\\" Netns:\\\"/var/run/netns/79ef201f-2724-4e7c-9a65-dcc6cc39b562\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=alertmanager-main-0;K8S_POD_INFRA_CONTAINER_ID=c9f346abaf0f074ea514f63c540980c31ccc1d16aa999d5e9f8d2a1e2d6ab9ed;K8S_POD_UID=8b300a46-0e04-4109-a370-2589ce3efa0c\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/alertmanager-main-0] networking: Multus: [openshift-monitoring/alertmanager-main-0/8b300a46-0e04-4109-a370-2589ce3efa0c]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod alertmanager-main-0 in out of cluster comm: SetNetworkStatus: failed to update the pod alertmanager-main-0 in out of cluster comm: status update failed for pod /: Get \\\"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-monitoring/pods/alertmanager-main-0?timeout=1m0s\\\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)\\n': StdinData: 
{\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/alertmanager-main-0" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" Mar 13 01:26:30.070631 master-0 kubenswrapper[19803]: E0313 01:26:30.070466 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:26:30.931003 master-0 kubenswrapper[19803]: I0313 01:26:30.930913 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:26:30.932327 master-0 kubenswrapper[19803]: I0313 01:26:30.932262 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:26:32.315580 master-0 kubenswrapper[19803]: I0313 01:26:32.315119 19803 scope.go:117] "RemoveContainer" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" Mar 13 01:26:32.316634 master-0 kubenswrapper[19803]: E0313 01:26:32.315648 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(24e04786030519cf5fd9f600ea6710e9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" Mar 13 01:26:36.315083 master-0 kubenswrapper[19803]: I0313 01:26:36.315000 19803 scope.go:117] "RemoveContainer" containerID="cf9aa79d7c848ce2a6eacd046ecf770aa18df700f052345baaa9f27169958c79" Mar 13 01:26:36.316112 master-0 kubenswrapper[19803]: E0313 01:26:36.315396 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-bj5ld_openshift-cluster-storage-operator(0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" podUID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" Mar 13 01:26:38.060781 master-0 kubenswrapper[19803]: E0313 01:26:38.060670 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="7s" Mar 13 01:26:40.071825 master-0 kubenswrapper[19803]: E0313 01:26:40.071727 19803 kubelet_node_status.go:585] "Error updating 
node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:26:41.270822 master-0 kubenswrapper[19803]: E0313 01:26:41.270583 19803 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-controller-manager-master-0.189c4205444ba439 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-master-0,UID:24e04786030519cf5fd9f600ea6710e9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fca00eb71b1f03e5b5180a66f3871f5626d337b56196622f5842cfc165523b4\" already present on machine,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:22:23.665210425 +0000 UTC m=+291.630358134,LastTimestamp:2026-03-13 01:22:23.665210425 +0000 UTC m=+291.630358134,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:26:41.692817 master-0 kubenswrapper[19803]: I0313 01:26:41.692599 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:41.692817 master-0 kubenswrapper[19803]: I0313 01:26:41.692717 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" 
podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:42.030736 master-0 kubenswrapper[19803]: I0313 01:26:42.030646 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-rpjkb_2760a216-fd4b-46d9-a4ec-2d3285ec02bd/machine-api-operator/0.log" Mar 13 01:26:42.031401 master-0 kubenswrapper[19803]: I0313 01:26:42.031349 19803 generic.go:334] "Generic (PLEG): container finished" podID="2760a216-fd4b-46d9-a4ec-2d3285ec02bd" containerID="746d63b70b482e97e137cf2a5fbc732604b747973c61d366e9b68a115a9813fc" exitCode=255 Mar 13 01:26:42.031575 master-0 kubenswrapper[19803]: I0313 01:26:42.031476 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" event={"ID":"2760a216-fd4b-46d9-a4ec-2d3285ec02bd","Type":"ContainerDied","Data":"746d63b70b482e97e137cf2a5fbc732604b747973c61d366e9b68a115a9813fc"} Mar 13 01:26:42.033014 master-0 kubenswrapper[19803]: I0313 01:26:42.032792 19803 scope.go:117] "RemoveContainer" containerID="746d63b70b482e97e137cf2a5fbc732604b747973c61d366e9b68a115a9813fc" Mar 13 01:26:42.035030 master-0 kubenswrapper[19803]: I0313 01:26:42.034950 19803 generic.go:334] "Generic (PLEG): container finished" podID="c55a215a-9a95-4f48-8668-9b76503c3044" containerID="735be8b153188a56b409f008bb739a615b04b0b4c113e5995034ae8189be2847" exitCode=0 Mar 13 01:26:42.035089 master-0 kubenswrapper[19803]: I0313 01:26:42.035065 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" event={"ID":"c55a215a-9a95-4f48-8668-9b76503c3044","Type":"ContainerDied","Data":"735be8b153188a56b409f008bb739a615b04b0b4c113e5995034ae8189be2847"} Mar 13 01:26:42.036048 master-0 kubenswrapper[19803]: I0313 
01:26:42.035889 19803 scope.go:117] "RemoveContainer" containerID="735be8b153188a56b409f008bb739a615b04b0b4c113e5995034ae8189be2847" Mar 13 01:26:42.043630 master-0 kubenswrapper[19803]: I0313 01:26:42.037442 19803 generic.go:334] "Generic (PLEG): container finished" podID="91fc568a-61ad-400e-a54e-21d62e51bb17" containerID="73dc7164c08f806e20d59b39c0dd97779a41348b9dd0a6d8c110bba4b0c80b70" exitCode=0 Mar 13 01:26:42.043630 master-0 kubenswrapper[19803]: I0313 01:26:42.037568 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" event={"ID":"91fc568a-61ad-400e-a54e-21d62e51bb17","Type":"ContainerDied","Data":"73dc7164c08f806e20d59b39c0dd97779a41348b9dd0a6d8c110bba4b0c80b70"} Mar 13 01:26:42.043630 master-0 kubenswrapper[19803]: I0313 01:26:42.038044 19803 scope.go:117] "RemoveContainer" containerID="73dc7164c08f806e20d59b39c0dd97779a41348b9dd0a6d8c110bba4b0c80b70" Mar 13 01:26:42.043630 master-0 kubenswrapper[19803]: I0313 01:26:42.043259 19803 generic.go:334] "Generic (PLEG): container finished" podID="250a32b4-cc8d-43fa-9dd1-0a8d85a2739a" containerID="c427477a6d58f1162b2fff7d8283200b9284d7a746e34cf1c1801ed10b839ebf" exitCode=0 Mar 13 01:26:42.043630 master-0 kubenswrapper[19803]: I0313 01:26:42.043345 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" event={"ID":"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a","Type":"ContainerDied","Data":"c427477a6d58f1162b2fff7d8283200b9284d7a746e34cf1c1801ed10b839ebf"} Mar 13 01:26:42.044685 master-0 kubenswrapper[19803]: I0313 01:26:42.043918 19803 scope.go:117] "RemoveContainer" containerID="c427477a6d58f1162b2fff7d8283200b9284d7a746e34cf1c1801ed10b839ebf" Mar 13 01:26:42.047710 master-0 kubenswrapper[19803]: I0313 01:26:42.047410 19803 generic.go:334] "Generic (PLEG): container finished" podID="77e6cd9e-b6ef-491c-a5c3-60dab81fd752" 
containerID="ba3486eb82a9ab1039bbc9db6456f118857b681bba7748f0325d9592ed3693f6" exitCode=0 Mar 13 01:26:42.047710 master-0 kubenswrapper[19803]: I0313 01:26:42.047485 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" event={"ID":"77e6cd9e-b6ef-491c-a5c3-60dab81fd752","Type":"ContainerDied","Data":"ba3486eb82a9ab1039bbc9db6456f118857b681bba7748f0325d9592ed3693f6"} Mar 13 01:26:42.047710 master-0 kubenswrapper[19803]: I0313 01:26:42.047542 19803 scope.go:117] "RemoveContainer" containerID="dcf6d152312d68c0bbfc80742b97a5a67fe4e1a416cc5f56000de592b4daaaa8" Mar 13 01:26:42.048441 master-0 kubenswrapper[19803]: I0313 01:26:42.047998 19803 scope.go:117] "RemoveContainer" containerID="ba3486eb82a9ab1039bbc9db6456f118857b681bba7748f0325d9592ed3693f6" Mar 13 01:26:42.052350 master-0 kubenswrapper[19803]: I0313 01:26:42.050237 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-pj26h_53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/package-server-manager/0.log" Mar 13 01:26:42.052350 master-0 kubenswrapper[19803]: I0313 01:26:42.051585 19803 generic.go:334] "Generic (PLEG): container finished" podID="53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59" containerID="e44a4909dcffde49ad35027597a7d7ccdbfe6e7971eece0f54a4f97505f5966a" exitCode=1 Mar 13 01:26:42.052350 master-0 kubenswrapper[19803]: I0313 01:26:42.051650 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" event={"ID":"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59","Type":"ContainerDied","Data":"e44a4909dcffde49ad35027597a7d7ccdbfe6e7971eece0f54a4f97505f5966a"} Mar 13 01:26:42.052350 master-0 kubenswrapper[19803]: I0313 01:26:42.052009 19803 scope.go:117] "RemoveContainer" containerID="e44a4909dcffde49ad35027597a7d7ccdbfe6e7971eece0f54a4f97505f5966a" Mar 13 01:26:42.064296 master-0 kubenswrapper[19803]: I0313 
01:26:42.060226 19803 generic.go:334] "Generic (PLEG): container finished" podID="dbcb4b80-425a-4dd5-93a8-bb462f641ef1" containerID="f17ab172b3fc00e3c3a0f9da9bed1e16efebdb5c429420e3295dc5cc1f9a7534" exitCode=0 Mar 13 01:26:42.064296 master-0 kubenswrapper[19803]: I0313 01:26:42.060339 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" event={"ID":"dbcb4b80-425a-4dd5-93a8-bb462f641ef1","Type":"ContainerDied","Data":"f17ab172b3fc00e3c3a0f9da9bed1e16efebdb5c429420e3295dc5cc1f9a7534"} Mar 13 01:26:42.064296 master-0 kubenswrapper[19803]: I0313 01:26:42.061848 19803 scope.go:117] "RemoveContainer" containerID="f17ab172b3fc00e3c3a0f9da9bed1e16efebdb5c429420e3295dc5cc1f9a7534" Mar 13 01:26:42.068461 master-0 kubenswrapper[19803]: I0313 01:26:42.068395 19803 generic.go:334] "Generic (PLEG): container finished" podID="c6db75e5-efd1-4bfa-9941-0934d7621ba2" containerID="769c129b7e29d4929952316ce6f7641c3c7ac9955f6a84df03be0a0cf43a0023" exitCode=0 Mar 13 01:26:42.068614 master-0 kubenswrapper[19803]: I0313 01:26:42.068553 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" event={"ID":"c6db75e5-efd1-4bfa-9941-0934d7621ba2","Type":"ContainerDied","Data":"769c129b7e29d4929952316ce6f7641c3c7ac9955f6a84df03be0a0cf43a0023"} Mar 13 01:26:42.069301 master-0 kubenswrapper[19803]: I0313 01:26:42.069245 19803 scope.go:117] "RemoveContainer" containerID="769c129b7e29d4929952316ce6f7641c3c7ac9955f6a84df03be0a0cf43a0023" Mar 13 01:26:42.071967 master-0 kubenswrapper[19803]: I0313 01:26:42.071858 19803 generic.go:334] "Generic (PLEG): container finished" podID="fbfc2caf-126e-41b9-9b31-05f7a45d8536" containerID="f3648127120432d42351630482fc5ec1314543a47769068b1c6a7ef537aa3e64" exitCode=0 Mar 13 01:26:42.072149 master-0 kubenswrapper[19803]: I0313 01:26:42.071918 19803 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" event={"ID":"fbfc2caf-126e-41b9-9b31-05f7a45d8536","Type":"ContainerDied","Data":"f3648127120432d42351630482fc5ec1314543a47769068b1c6a7ef537aa3e64"} Mar 13 01:26:42.073277 master-0 kubenswrapper[19803]: I0313 01:26:42.073202 19803 scope.go:117] "RemoveContainer" containerID="f3648127120432d42351630482fc5ec1314543a47769068b1c6a7ef537aa3e64" Mar 13 01:26:42.076962 master-0 kubenswrapper[19803]: I0313 01:26:42.076893 19803 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="1671c753884a85b9d5990bcf5a091faa5ed2c13052477fadfd66f9da210dc6ae" exitCode=0 Mar 13 01:26:42.077112 master-0 kubenswrapper[19803]: I0313 01:26:42.077004 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerDied","Data":"1671c753884a85b9d5990bcf5a091faa5ed2c13052477fadfd66f9da210dc6ae"} Mar 13 01:26:42.077859 master-0 kubenswrapper[19803]: I0313 01:26:42.077804 19803 scope.go:117] "RemoveContainer" containerID="1671c753884a85b9d5990bcf5a091faa5ed2c13052477fadfd66f9da210dc6ae" Mar 13 01:26:42.081420 master-0 kubenswrapper[19803]: I0313 01:26:42.081341 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/0.log" Mar 13 01:26:42.082072 master-0 kubenswrapper[19803]: I0313 01:26:42.082008 19803 generic.go:334] "Generic (PLEG): container finished" podID="6fd82994-f4d4-49e9-8742-07e206322e76" containerID="c6f2c7ce1ebd48d89e8b89aa6f0c61474cf42c8cd887993b37c623a2d414e5fb" exitCode=0 Mar 13 01:26:42.082234 master-0 kubenswrapper[19803]: I0313 01:26:42.082179 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" 
event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerDied","Data":"c6f2c7ce1ebd48d89e8b89aa6f0c61474cf42c8cd887993b37c623a2d414e5fb"} Mar 13 01:26:42.083196 master-0 kubenswrapper[19803]: I0313 01:26:42.083138 19803 scope.go:117] "RemoveContainer" containerID="c6f2c7ce1ebd48d89e8b89aa6f0c61474cf42c8cd887993b37c623a2d414e5fb" Mar 13 01:26:42.086090 master-0 kubenswrapper[19803]: I0313 01:26:42.086043 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-wk89g_8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/cluster-node-tuning-operator/0.log" Mar 13 01:26:42.086295 master-0 kubenswrapper[19803]: I0313 01:26:42.086132 19803 generic.go:334] "Generic (PLEG): container finished" podID="8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7" containerID="a5eb96a4d4ede22b3223c3ca47936d4bf89e778e44ce7bc9963d80d230415d56" exitCode=1 Mar 13 01:26:42.086295 master-0 kubenswrapper[19803]: I0313 01:26:42.086177 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" event={"ID":"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7","Type":"ContainerDied","Data":"a5eb96a4d4ede22b3223c3ca47936d4bf89e778e44ce7bc9963d80d230415d56"} Mar 13 01:26:42.086973 master-0 kubenswrapper[19803]: I0313 01:26:42.086908 19803 scope.go:117] "RemoveContainer" containerID="a5eb96a4d4ede22b3223c3ca47936d4bf89e778e44ce7bc9963d80d230415d56" Mar 13 01:26:42.094458 master-0 kubenswrapper[19803]: I0313 01:26:42.094277 19803 scope.go:117] "RemoveContainer" containerID="c248d157af93f66dc74e732d276f334cdb9f66f93ff85dda8f8ef75466a1cda2" Mar 13 01:26:42.203929 master-0 kubenswrapper[19803]: I0313 01:26:42.203872 19803 scope.go:117] "RemoveContainer" containerID="5436fbc43037209189594bd015e39350294b9b8da6b6096cb145d36bfb03543f" Mar 13 01:26:42.354342 master-0 kubenswrapper[19803]: I0313 01:26:42.354286 19803 scope.go:117] "RemoveContainer" 
containerID="544c375d0985569800e6f6387597c6bbdd7b9967f0bc5e80927a60f7a9628d80" Mar 13 01:26:43.098668 master-0 kubenswrapper[19803]: I0313 01:26:43.098270 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operator-lifecycle-manager_package-server-manager-854648ff6d-pj26h_53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59/package-server-manager/0.log" Mar 13 01:26:43.099053 master-0 kubenswrapper[19803]: I0313 01:26:43.098873 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" event={"ID":"53aaf759-2dd0-4d00-a0f6-bd3dbe7dfd59","Type":"ContainerStarted","Data":"9ce8189cfe4102d63c13cb50de8221ed739a8a893afa88267dc37435ee941bfb"} Mar 13 01:26:43.099345 master-0 kubenswrapper[19803]: I0313 01:26:43.099297 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:26:43.102215 master-0 kubenswrapper[19803]: I0313 01:26:43.102169 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-5884b9cd56-8r87t" event={"ID":"77e6cd9e-b6ef-491c-a5c3-60dab81fd752","Type":"ContainerStarted","Data":"ba66937ae6a2462af9de0e54ad7fdab509350c8ac9f5ce6ade55d5f2e5b28ad4"} Mar 13 01:26:43.106389 master-0 kubenswrapper[19803]: I0313 01:26:43.105891 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-fdb5c78b5-fr2dk" event={"ID":"dbcb4b80-425a-4dd5-93a8-bb462f641ef1","Type":"ContainerStarted","Data":"e56e3d7145feb38347e54359da703bdc7de873649d99916b94f5f44d3254e6cc"} Mar 13 01:26:43.119394 master-0 kubenswrapper[19803]: I0313 01:26:43.119334 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5c74bfc494-8fkz8" 
event={"ID":"c6db75e5-efd1-4bfa-9941-0934d7621ba2","Type":"ContainerStarted","Data":"b230b9dad44a132362c457ccb4d30350936ea3c2d93c4238ef2fd35343bfab92"} Mar 13 01:26:43.129743 master-0 kubenswrapper[19803]: I0313 01:26:43.129484 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-69b6fc6b88-rghrf" event={"ID":"fbfc2caf-126e-41b9-9b31-05f7a45d8536","Type":"ContainerStarted","Data":"c46a8fd4a89cdc75c58219be145bb9ceb05b919b25d4c29ddde34a1df91764b3"} Mar 13 01:26:43.132951 master-0 kubenswrapper[19803]: I0313 01:26:43.132900 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-rpjkb_2760a216-fd4b-46d9-a4ec-2d3285ec02bd/machine-api-operator/0.log" Mar 13 01:26:43.133942 master-0 kubenswrapper[19803]: I0313 01:26:43.133847 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-84bf6db4f9-rpjkb" event={"ID":"2760a216-fd4b-46d9-a4ec-2d3285ec02bd","Type":"ContainerStarted","Data":"c64679b793d3b04b4fd34ab4488fa2790740e82dee4d75e2e4aae846bc483533"} Mar 13 01:26:43.137721 master-0 kubenswrapper[19803]: I0313 01:26:43.137644 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-olm-operator/cluster-olm-operator-77899cf6d-rzdkn" event={"ID":"250a32b4-cc8d-43fa-9dd1-0a8d85a2739a","Type":"ContainerStarted","Data":"0a16d884756380a61ffdc0f0a4138316b72a8268ce5844bceff57b4301de7c0d"} Mar 13 01:26:43.141827 master-0 kubenswrapper[19803]: I0313 01:26:43.141737 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerStarted","Data":"d80250fb5ca4a81975508f62f493abf4d5670d3f8a8658e6ef23dc6a9d4cbaef"} Mar 13 01:26:43.143110 master-0 kubenswrapper[19803]: I0313 01:26:43.143019 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:26:43.144562 master-0 kubenswrapper[19803]: I0313 01:26:43.144471 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-wk89g_8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/cluster-node-tuning-operator/0.log" Mar 13 01:26:43.144917 master-0 kubenswrapper[19803]: I0313 01:26:43.144641 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-node-tuning-operator/cluster-node-tuning-operator-66c7586884-wk89g" event={"ID":"8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7","Type":"ContainerStarted","Data":"c6984fdf7632036cdb9abacb2cfca3ff5d28745f53f573ee4583e816c0da9e04"} Mar 13 01:26:43.148484 master-0 kubenswrapper[19803]: I0313 01:26:43.148436 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-ff46b7bdf-nnjxp" event={"ID":"c55a215a-9a95-4f48-8668-9b76503c3044","Type":"ContainerStarted","Data":"35166ac7a97fa3f23b97a439e8538d27588afdeb3986b0d5678799c57b68b3b2"} Mar 13 01:26:43.151486 master-0 kubenswrapper[19803]: I0313 01:26:43.151399 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86d6d77c7c-6vvzl" event={"ID":"91fc568a-61ad-400e-a54e-21d62e51bb17","Type":"ContainerStarted","Data":"4da1b2fa952b4446f11267b0d404cd5dc591db280e4b5139543b57e6327e2b40"} Mar 13 01:26:43.160023 master-0 kubenswrapper[19803]: I0313 01:26:43.159978 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"cdcecc61ff5eeb08bd2a3ac12599e4f9","Type":"ContainerStarted","Data":"e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3"} Mar 13 01:26:43.199992 master-0 kubenswrapper[19803]: E0313 01:26:43.199863 19803 log.go:32] "RunPodSandbox from runtime service failed" err=< Mar 13 01:26:43.199992 master-0 
kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e" Netns:"/var/run/netns/3fc870aa-0bf6-482c-be94-4456d25a863b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods prometheus-k8s-0) Mar 13 01:26:43.199992 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:26:43.199992 master-0 kubenswrapper[19803]: > Mar 13 01:26:43.200439 master-0 kubenswrapper[19803]: E0313 01:26:43.200063 19803 kuberuntime_sandbox.go:72] 
"Failed to create sandbox for pod" err=< Mar 13 01:26:43.200439 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e" Netns:"/var/run/netns/3fc870aa-0bf6-482c-be94-4456d25a863b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods prometheus-k8s-0) Mar 13 01:26:43.200439 master-0 kubenswrapper[19803]: ': StdinData: {"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:26:43.200439 master-0 kubenswrapper[19803]: > 
pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:26:43.200439 master-0 kubenswrapper[19803]: E0313 01:26:43.200116 19803 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err=< Mar 13 01:26:43.200439 master-0 kubenswrapper[19803]: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network "multus-cni-network": plugin type="multus-shim" name="multus-cni-network" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:"d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e" Netns:"/var/run/netns/3fc870aa-0bf6-482c-be94-4456d25a863b" IfName:"eth0" Args:"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187" Path:"" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods prometheus-k8s-0) Mar 13 01:26:43.200439 master-0 kubenswrapper[19803]: ': StdinData: 
{"binDir":"/var/lib/cni/bin","clusterNetwork":"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf","cniVersion":"0.3.1","daemonSocketDir":"/run/multus/socket","globalNamespaces":"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv","logLevel":"verbose","logToStderr":true,"name":"multus-cni-network","namespaceIsolation":true,"type":"multus-shim"} Mar 13 01:26:43.200439 master-0 kubenswrapper[19803]: > pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:26:43.200750 master-0 kubenswrapper[19803]: E0313 01:26:43.200649 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"prometheus-k8s-0_openshift-monitoring(80dda8c5-33c6-46ba-b4fa-8e4877de9187)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"prometheus-k8s-0_openshift-monitoring(80dda8c5-33c6-46ba-b4fa-8e4877de9187)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_prometheus-k8s-0_openshift-monitoring_80dda8c5-33c6-46ba-b4fa-8e4877de9187_0(d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e): error adding pod openshift-monitoring_prometheus-k8s-0 to CNI network \\\"multus-cni-network\\\": plugin type=\\\"multus-shim\\\" name=\\\"multus-cni-network\\\" failed (add): CmdAdd (shim): CNI request failed with status 400: 'ContainerID:\\\"d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e\\\" Netns:\\\"/var/run/netns/3fc870aa-0bf6-482c-be94-4456d25a863b\\\" IfName:\\\"eth0\\\" Args:\\\"IgnoreUnknown=1;K8S_POD_NAMESPACE=openshift-monitoring;K8S_POD_NAME=prometheus-k8s-0;K8S_POD_INFRA_CONTAINER_ID=d4f3d9f3e0326616f8ddfa120e8c4f43adc6b98601f5939fc7d229906f6ccd1e;K8S_POD_UID=80dda8c5-33c6-46ba-b4fa-8e4877de9187\\\" Path:\\\"\\\" ERRORED: error configuring pod [openshift-monitoring/prometheus-k8s-0] networking: Multus: [openshift-monitoring/prometheus-k8s-0/80dda8c5-33c6-46ba-b4fa-8e4877de9187]: error setting the networks status: SetPodNetworkStatusAnnotation: failed to update the pod 
prometheus-k8s-0 in out of cluster comm: SetNetworkStatus: failed to update the pod prometheus-k8s-0 in out of cluster comm: status update failed for pod /: the server was unable to return a response in the time allotted, but may still be processing the request (get pods prometheus-k8s-0)\\n': StdinData: {\\\"binDir\\\":\\\"/var/lib/cni/bin\\\",\\\"clusterNetwork\\\":\\\"/host/run/multus/cni/net.d/10-ovn-kubernetes.conf\\\",\\\"cniVersion\\\":\\\"0.3.1\\\",\\\"daemonSocketDir\\\":\\\"/run/multus/socket\\\",\\\"globalNamespaces\\\":\\\"default,openshift-multus,openshift-sriov-network-operator,openshift-cnv\\\",\\\"logLevel\\\":\\\"verbose\\\",\\\"logToStderr\\\":true,\\\"name\\\":\\\"multus-cni-network\\\",\\\"namespaceIsolation\\\":true,\\\"type\\\":\\\"multus-shim\\\"}\"" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" Mar 13 01:26:44.172175 master-0 kubenswrapper[19803]: I0313 01:26:44.172107 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/2.log" Mar 13 01:26:44.173425 master-0 kubenswrapper[19803]: I0313 01:26:44.173366 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/1.log" Mar 13 01:26:44.173997 master-0 kubenswrapper[19803]: I0313 01:26:44.173953 19803 generic.go:334] "Generic (PLEG): container finished" podID="21110b48-25fc-434a-b156-7f6bd6064bed" containerID="0135ba4974603368e77b84b78bf31b903fe3d5cdbea51d504385ae47de44d443" exitCode=1 Mar 13 01:26:44.174068 master-0 kubenswrapper[19803]: I0313 01:26:44.174009 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" 
event={"ID":"21110b48-25fc-434a-b156-7f6bd6064bed","Type":"ContainerDied","Data":"0135ba4974603368e77b84b78bf31b903fe3d5cdbea51d504385ae47de44d443"} Mar 13 01:26:44.174103 master-0 kubenswrapper[19803]: I0313 01:26:44.174078 19803 scope.go:117] "RemoveContainer" containerID="a06159b44add11f1f640a62febc09ee506ed8ed487eaccded51fcfacb9d58f8c" Mar 13 01:26:44.175214 master-0 kubenswrapper[19803]: I0313 01:26:44.175138 19803 scope.go:117] "RemoveContainer" containerID="0135ba4974603368e77b84b78bf31b903fe3d5cdbea51d504385ae47de44d443" Mar 13 01:26:44.175926 master-0 kubenswrapper[19803]: E0313 01:26:44.175856 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api(21110b48-25fc-434a-b156-7f6bd6064bed)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" podUID="21110b48-25fc-434a-b156-7f6bd6064bed" Mar 13 01:26:44.176111 master-0 kubenswrapper[19803]: I0313 01:26:44.176074 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:26:44.176941 master-0 kubenswrapper[19803]: I0313 01:26:44.176907 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:26:44.316651 master-0 kubenswrapper[19803]: I0313 01:26:44.315233 19803 scope.go:117] "RemoveContainer" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" Mar 13 01:26:44.316651 master-0 kubenswrapper[19803]: E0313 01:26:44.315721 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-policy-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=cluster-policy-controller pod=kube-controller-manager-master-0_openshift-kube-controller-manager(24e04786030519cf5fd9f600ea6710e9)\"" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" Mar 13 01:26:45.186011 master-0 kubenswrapper[19803]: I0313 01:26:45.185927 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/2.log" Mar 13 01:26:47.169875 master-0 kubenswrapper[19803]: I0313 01:26:47.169722 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:47.171006 master-0 kubenswrapper[19803]: I0313 01:26:47.169904 19803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:47.693148 master-0 kubenswrapper[19803]: I0313 01:26:47.693049 19803 patch_prober.go:28] interesting 
pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:47.693403 master-0 kubenswrapper[19803]: I0313 01:26:47.693172 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:50.072443 master-0 kubenswrapper[19803]: E0313 01:26:50.072379 19803 kubelet_node_status.go:585] "Error updating node status, will retry" err="error getting node \"master-0\": Get \"https://api-int.sno.openstack.lab:6443/api/v1/nodes/master-0?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Mar 13 01:26:50.168931 master-0 kubenswrapper[19803]: I0313 01:26:50.168851 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:50.169215 master-0 kubenswrapper[19803]: I0313 01:26:50.168943 19803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:50.693194 master-0 kubenswrapper[19803]: I0313 01:26:50.693105 19803 patch_prober.go:28] interesting 
pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:50.693414 master-0 kubenswrapper[19803]: I0313 01:26:50.693255 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:51.315302 master-0 kubenswrapper[19803]: I0313 01:26:51.315218 19803 scope.go:117] "RemoveContainer" containerID="cf9aa79d7c848ce2a6eacd046ecf770aa18df700f052345baaa9f27169958c79" Mar 13 01:26:51.316480 master-0 kubenswrapper[19803]: E0313 01:26:51.316197 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"snapshot-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=snapshot-controller pod=csi-snapshot-controller-7577d6f48-bj5ld_openshift-cluster-storage-operator(0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a)\"" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" podUID="0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a" Mar 13 01:26:53.169281 master-0 kubenswrapper[19803]: I0313 01:26:53.169194 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:53.170286 master-0 kubenswrapper[19803]: I0313 01:26:53.170235 19803 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:53.170501 master-0 kubenswrapper[19803]: I0313 01:26:53.170470 19803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:26:53.171885 master-0 kubenswrapper[19803]: I0313 01:26:53.171842 19803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"d80250fb5ca4a81975508f62f493abf4d5670d3f8a8658e6ef23dc6a9d4cbaef"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 13 01:26:53.172168 master-0 kubenswrapper[19803]: I0313 01:26:53.171842 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:53.172284 master-0 kubenswrapper[19803]: I0313 01:26:53.172093 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" containerID="cri-o://d80250fb5ca4a81975508f62f493abf4d5670d3f8a8658e6ef23dc6a9d4cbaef" gracePeriod=30 Mar 13 01:26:53.172284 master-0 kubenswrapper[19803]: I0313 01:26:53.172223 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" 
podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:53.263484 master-0 kubenswrapper[19803]: I0313 01:26:53.263375 19803 generic.go:334] "Generic (PLEG): container finished" podID="b3bf9dde-ca5b-46b8-883c-51e88ddf52e1" containerID="4f14fab0dbb3eda2a307a2d270febfa72f62097bfd703e6c81d2be48ab7a51a0" exitCode=0 Mar 13 01:26:53.263484 master-0 kubenswrapper[19803]: I0313 01:26:53.263459 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" event={"ID":"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1","Type":"ContainerDied","Data":"4f14fab0dbb3eda2a307a2d270febfa72f62097bfd703e6c81d2be48ab7a51a0"} Mar 13 01:26:53.264210 master-0 kubenswrapper[19803]: I0313 01:26:53.264151 19803 scope.go:117] "RemoveContainer" containerID="4f14fab0dbb3eda2a307a2d270febfa72f62097bfd703e6c81d2be48ab7a51a0" Mar 13 01:26:53.694945 master-0 kubenswrapper[19803]: I0313 01:26:53.694870 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:53.695109 master-0 kubenswrapper[19803]: I0313 01:26:53.694998 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:53.801792 master-0 kubenswrapper[19803]: E0313 01:26:53.801733 19803 mirror_client.go:138] "Failed deleting a mirror pod" err="Timeout: request did 
not complete within requested timeout - context deadline exceeded" pod="openshift-etcd/etcd-master-0" Mar 13 01:26:54.274570 master-0 kubenswrapper[19803]: I0313 01:26:54.274472 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-8c9c967c7-jzj9v" event={"ID":"b3bf9dde-ca5b-46b8-883c-51e88ddf52e1","Type":"ContainerStarted","Data":"9ee6ceeb66571ec704047c38c04db9eaa317d6d38dd23b4e250632914949cb3a"} Mar 13 01:26:54.278277 master-0 kubenswrapper[19803]: I0313 01:26:54.278221 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/2.log" Mar 13 01:26:54.279434 master-0 kubenswrapper[19803]: I0313 01:26:54.279383 19803 generic.go:334] "Generic (PLEG): container finished" podID="6fd82994-f4d4-49e9-8742-07e206322e76" containerID="d80250fb5ca4a81975508f62f493abf4d5670d3f8a8658e6ef23dc6a9d4cbaef" exitCode=255 Mar 13 01:26:54.279434 master-0 kubenswrapper[19803]: I0313 01:26:54.279451 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerDied","Data":"d80250fb5ca4a81975508f62f493abf4d5670d3f8a8658e6ef23dc6a9d4cbaef"} Mar 13 01:26:54.279726 master-0 kubenswrapper[19803]: I0313 01:26:54.279502 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerStarted","Data":"eb163595f657d3d465404b9e814f2c9a64ffb4106e2b87f575cc373770465bcf"} Mar 13 01:26:54.279726 master-0 kubenswrapper[19803]: I0313 01:26:54.279560 19803 scope.go:117] "RemoveContainer" containerID="c6f2c7ce1ebd48d89e8b89aa6f0c61474cf42c8cd887993b37c623a2d414e5fb" Mar 13 01:26:54.280080 master-0 kubenswrapper[19803]: I0313 01:26:54.279986 
19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:26:55.288812 master-0 kubenswrapper[19803]: I0313 01:26:55.288751 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/2.log" Mar 13 01:26:55.314263 master-0 kubenswrapper[19803]: I0313 01:26:55.314208 19803 scope.go:117] "RemoveContainer" containerID="0135ba4974603368e77b84b78bf31b903fe3d5cdbea51d504385ae47de44d443" Mar 13 01:26:55.314415 master-0 kubenswrapper[19803]: I0313 01:26:55.314400 19803 scope.go:117] "RemoveContainer" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" Mar 13 01:26:55.314464 master-0 kubenswrapper[19803]: E0313 01:26:55.314412 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cluster-baremetal-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=cluster-baremetal-operator pod=cluster-baremetal-operator-5cdb4c5598-5dvnt_openshift-machine-api(21110b48-25fc-434a-b156-7f6bd6064bed)\"" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" podUID="21110b48-25fc-434a-b156-7f6bd6064bed" Mar 13 01:26:56.170086 master-0 kubenswrapper[19803]: I0313 01:26:56.169946 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:56.170409 master-0 kubenswrapper[19803]: I0313 01:26:56.170073 19803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" 
containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:56.303663 master-0 kubenswrapper[19803]: I0313 01:26:56.303559 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/3.log" Mar 13 01:26:56.306561 master-0 kubenswrapper[19803]: I0313 01:26:56.306450 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log" Mar 13 01:26:56.306709 master-0 kubenswrapper[19803]: I0313 01:26:56.306614 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064"} Mar 13 01:26:56.694100 master-0 kubenswrapper[19803]: I0313 01:26:56.693961 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:56.694468 master-0 kubenswrapper[19803]: I0313 01:26:56.694123 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:59.169280 master-0 kubenswrapper[19803]: I0313 01:26:59.169135 19803 patch_prober.go:28] interesting 
pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:59.169280 master-0 kubenswrapper[19803]: I0313 01:26:59.169256 19803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:26:59.693832 master-0 kubenswrapper[19803]: I0313 01:26:59.693715 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:26:59.694076 master-0 kubenswrapper[19803]: I0313 01:26:59.693842 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:27:00.192544 master-0 kubenswrapper[19803]: I0313 01:27:00.191637 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:27:01.479549 master-0 kubenswrapper[19803]: I0313 01:27:01.479111 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:27:01.485633 master-0 kubenswrapper[19803]: 
I0313 01:27:01.484244 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:27:02.166703 master-0 kubenswrapper[19803]: W0313 01:27:02.166653 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b300a46_0e04_4109_a370_2589ce3efa0c.slice/crio-bf411ce7f4ec34805ce73543df45f7c685ae3232f316f9344feaa3b9efe22bb7 WatchSource:0}: Error finding container bf411ce7f4ec34805ce73543df45f7c685ae3232f316f9344feaa3b9efe22bb7: Status 404 returned error can't find the container with id bf411ce7f4ec34805ce73543df45f7c685ae3232f316f9344feaa3b9efe22bb7 Mar 13 01:27:02.168791 master-0 kubenswrapper[19803]: I0313 01:27:02.168731 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:27:02.168895 master-0 kubenswrapper[19803]: I0313 01:27:02.168837 19803 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:27:02.168988 master-0 kubenswrapper[19803]: I0313 01:27:02.168947 19803 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:27:02.169999 master-0 kubenswrapper[19803]: W0313 01:27:02.169467 19803 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80dda8c5_33c6_46ba_b4fa_8e4877de9187.slice/crio-00f8919fb307757ca915c63b29d25cbc015314bd7fd310f0d1b2c388c59e3462 WatchSource:0}: Error finding container 00f8919fb307757ca915c63b29d25cbc015314bd7fd310f0d1b2c388c59e3462: Status 404 returned error can't find the container with id 00f8919fb307757ca915c63b29d25cbc015314bd7fd310f0d1b2c388c59e3462 Mar 13 01:27:02.169999 master-0 kubenswrapper[19803]: I0313 01:27:02.169679 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:27:02.169999 master-0 kubenswrapper[19803]: I0313 01:27:02.169716 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:27:02.169999 master-0 kubenswrapper[19803]: I0313 01:27:02.169982 19803 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="openshift-config-operator" containerStatusID={"Type":"cri-o","ID":"eb163595f657d3d465404b9e814f2c9a64ffb4106e2b87f575cc373770465bcf"} pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" containerMessage="Container openshift-config-operator failed liveness probe, will be restarted" Mar 13 01:27:02.170178 master-0 kubenswrapper[19803]: I0313 01:27:02.170006 19803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 01:27:02.170178 master-0 kubenswrapper[19803]: I0313 01:27:02.170033 19803 kuberuntime_container.go:808] "Killing container with a 
grace period" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" containerID="cri-o://eb163595f657d3d465404b9e814f2c9a64ffb4106e2b87f575cc373770465bcf" gracePeriod=30 Mar 13 01:27:02.197738 master-0 kubenswrapper[19803]: I0313 01:27:02.197638 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 01:27:02.205300 master-0 kubenswrapper[19803]: I0313 01:27:02.203641 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 01:27:02.206127 master-0 kubenswrapper[19803]: I0313 01:27:02.206041 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-84f57b9877-ffb2n" podStartSLOduration=263.328936421 podStartE2EDuration="4m56.206018531s" podCreationTimestamp="2026-03-13 01:22:06 +0000 UTC" firstStartedPulling="2026-03-13 01:22:07.645019615 +0000 UTC m=+275.610167304" lastFinishedPulling="2026-03-13 01:22:40.522101695 +0000 UTC m=+308.487249414" observedRunningTime="2026-03-13 01:27:02.187391655 +0000 UTC m=+570.152539354" watchObservedRunningTime="2026-03-13 01:27:02.206018531 +0000 UTC m=+570.171166220" Mar 13 01:27:02.399567 master-0 kubenswrapper[19803]: I0313 01:27:02.398664 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddtwn"] Mar 13 01:27:02.418622 master-0 kubenswrapper[19803]: I0313 01:27:02.409212 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/multus-admission-controller-8d675b596-ddtwn"] Mar 13 01:27:02.428110 master-0 kubenswrapper[19803]: I0313 01:27:02.422917 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerStarted","Data":"bf411ce7f4ec34805ce73543df45f7c685ae3232f316f9344feaa3b9efe22bb7"} Mar 13 01:27:02.459297 master-0 kubenswrapper[19803]: I0313 01:27:02.456561 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerStarted","Data":"00f8919fb307757ca915c63b29d25cbc015314bd7fd310f0d1b2c388c59e3462"} Mar 13 01:27:02.588537 master-0 kubenswrapper[19803]: E0313 01:27:02.588464 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-trr9r_openshift-config-operator(6fd82994-f4d4-49e9-8742-07e206322e76)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" Mar 13 01:27:02.693198 master-0 kubenswrapper[19803]: I0313 01:27:02.693034 19803 patch_prober.go:28] interesting pod/openshift-config-operator-64488f9d78-trr9r container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" start-of-body= Mar 13 01:27:02.693198 master-0 kubenswrapper[19803]: I0313 01:27:02.693117 19803 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.128.0.9:8443/healthz\": dial tcp 10.128.0.9:8443: connect: connection refused" Mar 13 01:27:03.249925 master-0 kubenswrapper[19803]: I0313 01:27:03.249856 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-multus/cni-sysctl-allowlist-ds-thhrl"] Mar 13 01:27:03.254419 master-0 kubenswrapper[19803]: I0313 01:27:03.252665 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-thhrl"] Mar 13 01:27:03.465824 master-0 kubenswrapper[19803]: I0313 01:27:03.465754 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/3.log" Mar 13 01:27:03.466624 master-0 kubenswrapper[19803]: I0313 01:27:03.466580 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/2.log" Mar 13 01:27:03.467305 master-0 kubenswrapper[19803]: I0313 01:27:03.467245 19803 generic.go:334] "Generic (PLEG): container finished" podID="6fd82994-f4d4-49e9-8742-07e206322e76" containerID="eb163595f657d3d465404b9e814f2c9a64ffb4106e2b87f575cc373770465bcf" exitCode=255 Mar 13 01:27:03.467387 master-0 kubenswrapper[19803]: I0313 01:27:03.467333 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerDied","Data":"eb163595f657d3d465404b9e814f2c9a64ffb4106e2b87f575cc373770465bcf"} Mar 13 01:27:03.467434 master-0 kubenswrapper[19803]: I0313 01:27:03.467422 19803 scope.go:117] "RemoveContainer" containerID="d80250fb5ca4a81975508f62f493abf4d5670d3f8a8658e6ef23dc6a9d4cbaef" Mar 13 01:27:03.468588 master-0 kubenswrapper[19803]: I0313 01:27:03.468543 19803 scope.go:117] "RemoveContainer" containerID="eb163595f657d3d465404b9e814f2c9a64ffb4106e2b87f575cc373770465bcf" Mar 13 01:27:03.468850 master-0 kubenswrapper[19803]: E0313 01:27:03.468808 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-trr9r_openshift-config-operator(6fd82994-f4d4-49e9-8742-07e206322e76)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" Mar 13 01:27:04.324124 master-0 kubenswrapper[19803]: I0313 01:27:04.324014 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="161d2fa6-a541-427a-a3e9-3297102a26f5" path="/var/lib/kubelet/pods/161d2fa6-a541-427a-a3e9-3297102a26f5/volumes" Mar 13 01:27:04.324894 master-0 kubenswrapper[19803]: I0313 01:27:04.324759 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4626655d-add4-4cbd-9ba7-7082f63db442" path="/var/lib/kubelet/pods/4626655d-add4-4cbd-9ba7-7082f63db442/volumes" Mar 13 01:27:04.475427 master-0 kubenswrapper[19803]: I0313 01:27:04.475351 19803 generic.go:334] "Generic (PLEG): container finished" podID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerID="e46867efd75c4c36750d2b1cae22396c86c896f1e63811cbdf54fe789741c60d" exitCode=0 Mar 13 01:27:04.475881 master-0 kubenswrapper[19803]: I0313 01:27:04.475831 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerDied","Data":"e46867efd75c4c36750d2b1cae22396c86c896f1e63811cbdf54fe789741c60d"} Mar 13 01:27:04.477951 master-0 kubenswrapper[19803]: I0313 01:27:04.477895 19803 generic.go:334] "Generic (PLEG): container finished" podID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerID="327852a029bfd0e834d21248720570e2f7ef7a434e195599fde2db98c26f8e41" exitCode=0 Mar 13 01:27:04.478017 master-0 kubenswrapper[19803]: I0313 01:27:04.477967 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerDied","Data":"327852a029bfd0e834d21248720570e2f7ef7a434e195599fde2db98c26f8e41"} Mar 13 01:27:04.483977 master-0 kubenswrapper[19803]: I0313 01:27:04.483921 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/3.log" Mar 13 01:27:06.316139 master-0 kubenswrapper[19803]: I0313 01:27:06.315643 19803 scope.go:117] "RemoveContainer" containerID="cf9aa79d7c848ce2a6eacd046ecf770aa18df700f052345baaa9f27169958c79" Mar 13 01:27:07.511797 master-0 kubenswrapper[19803]: I0313 01:27:07.511736 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/4.log" Mar 13 01:27:07.512486 master-0 kubenswrapper[19803]: I0313 01:27:07.511969 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-storage-operator/csi-snapshot-controller-7577d6f48-bj5ld" event={"ID":"0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a","Type":"ContainerStarted","Data":"f2eb823507a574b66808fe8bf1077551f411e2b4c82f011bebab916c0317ef11"} Mar 13 01:27:07.516604 master-0 kubenswrapper[19803]: I0313 01:27:07.516547 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerStarted","Data":"87e7d839ee2a53e1c3f74a54b26e92cfb08db8934c88bf727c2b174e10eaeb14"} Mar 13 01:27:08.314194 master-0 kubenswrapper[19803]: I0313 01:27:08.314141 19803 scope.go:117] "RemoveContainer" containerID="0135ba4974603368e77b84b78bf31b903fe3d5cdbea51d504385ae47de44d443" Mar 13 01:27:09.537091 master-0 kubenswrapper[19803]: I0313 01:27:09.537018 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" 
event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerStarted","Data":"af3dea87089055ed8ff0a504beb660d463839e3f0a89b7384e4a83b81ca39cd2"} Mar 13 01:27:09.538472 master-0 kubenswrapper[19803]: I0313 01:27:09.538452 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerStarted","Data":"bf44bf0654c243447f5c2eddd5cb8108dd3746163d5c74fb0917f512b255102e"} Mar 13 01:27:09.538591 master-0 kubenswrapper[19803]: I0313 01:27:09.538573 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerStarted","Data":"aae2a34209a7f70578604cbdaf885049b779a8cdbb0f4b62cc513666e9bd8b15"} Mar 13 01:27:09.538799 master-0 kubenswrapper[19803]: I0313 01:27:09.538777 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/2.log" Mar 13 01:27:09.540460 master-0 kubenswrapper[19803]: I0313 01:27:09.539149 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/cluster-baremetal-operator-5cdb4c5598-5dvnt" event={"ID":"21110b48-25fc-434a-b156-7f6bd6064bed","Type":"ContainerStarted","Data":"352996990ebb5d5a84c0bf6c31512cdb16c2a8cac4d1038cae30a87387cc7b18"} Mar 13 01:27:09.543991 master-0 kubenswrapper[19803]: I0313 01:27:09.543902 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerStarted","Data":"2c37e2d8ff924d4aa20df6157bfd1b832becd3b10c0519ea45391c27ebf82fc4"} Mar 13 01:27:09.543991 master-0 kubenswrapper[19803]: I0313 01:27:09.543957 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerStarted","Data":"ec7d02065e35d431513db822623266aaf8c95259a309a1005f171fe05f0e637e"} Mar 13 01:27:10.203891 master-0 kubenswrapper[19803]: I0313 01:27:10.203710 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:27:10.560645 master-0 kubenswrapper[19803]: I0313 01:27:10.560495 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerStarted","Data":"3a0472a659129f987f1d91c84295078b1dadc74543f77b94adee51424a3773b8"} Mar 13 01:27:10.560645 master-0 kubenswrapper[19803]: I0313 01:27:10.560647 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerStarted","Data":"c1f4c96a645b26f09b2c0582119a2127c438791eb500b75817b09119417c519f"} Mar 13 01:27:10.566702 master-0 kubenswrapper[19803]: I0313 01:27:10.566635 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerStarted","Data":"0a088a6c8e4a4ccc35a614fd9f7ebc52c2972439147464ce8f71cfb707b2d4df"} Mar 13 01:27:10.566789 master-0 kubenswrapper[19803]: I0313 01:27:10.566708 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerStarted","Data":"85197c0ec36988e7a55e25e57f92181f596578fbe76bd9378133dae8329fd79d"} Mar 13 01:27:10.566789 master-0 kubenswrapper[19803]: I0313 01:27:10.566723 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerStarted","Data":"6d6124c93746c8e1229ad1de12d71fb75ec7fe2a0b05142a6d2ec0842ec2a4e8"} Mar 13 01:27:10.566789 master-0 kubenswrapper[19803]: I0313 01:27:10.566734 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerStarted","Data":"0026657e78430f9b6b6d2a35bd7651b500cfaf15f0b053eff7c7f69c0cbf7516"} Mar 13 01:27:10.601060 master-0 kubenswrapper[19803]: I0313 01:27:10.600950 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=532.959300486 podStartE2EDuration="8m57.600913435s" podCreationTimestamp="2026-03-13 01:18:13 +0000 UTC" firstStartedPulling="2026-03-13 01:27:02.169887027 +0000 UTC m=+570.135034716" lastFinishedPulling="2026-03-13 01:27:06.811499986 +0000 UTC m=+574.776647665" observedRunningTime="2026-03-13 01:27:10.593901363 +0000 UTC m=+578.559049152" watchObservedRunningTime="2026-03-13 01:27:10.600913435 +0000 UTC m=+578.566061114" Mar 13 01:27:10.650918 master-0 kubenswrapper[19803]: I0313 01:27:10.610205 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:27:10.669341 master-0 kubenswrapper[19803]: I0313 01:27:10.669088 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=523.859085655 podStartE2EDuration="8m50.669067973s" podCreationTimestamp="2026-03-13 01:18:20 +0000 UTC" firstStartedPulling="2026-03-13 01:27:02.172712046 +0000 UTC m=+570.137859735" lastFinishedPulling="2026-03-13 01:27:08.982694374 +0000 UTC m=+576.947842053" observedRunningTime="2026-03-13 01:27:10.668719755 +0000 UTC m=+578.633867474" watchObservedRunningTime="2026-03-13 01:27:10.669067973 +0000 UTC m=+578.634215652" Mar 13 01:27:14.516703 master-0 
kubenswrapper[19803]: I0313 01:27:14.516581 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-854648ff6d-pj26h" Mar 13 01:27:17.315355 master-0 kubenswrapper[19803]: I0313 01:27:17.315268 19803 scope.go:117] "RemoveContainer" containerID="eb163595f657d3d465404b9e814f2c9a64ffb4106e2b87f575cc373770465bcf" Mar 13 01:27:17.316537 master-0 kubenswrapper[19803]: E0313 01:27:17.315758 19803 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"openshift-config-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=openshift-config-operator pod=openshift-config-operator-64488f9d78-trr9r_openshift-config-operator(6fd82994-f4d4-49e9-8742-07e206322e76)\"" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" podUID="6fd82994-f4d4-49e9-8742-07e206322e76" Mar 13 01:27:18.734831 master-0 kubenswrapper[19803]: I0313 01:27:18.734741 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 01:27:18.735765 master-0 kubenswrapper[19803]: E0313 01:27:18.735712 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4626655d-add4-4cbd-9ba7-7082f63db442" containerName="kube-multus-additional-cni-plugins" Mar 13 01:27:18.735765 master-0 kubenswrapper[19803]: I0313 01:27:18.735746 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4626655d-add4-4cbd-9ba7-7082f63db442" containerName="kube-multus-additional-cni-plugins" Mar 13 01:27:18.735765 master-0 kubenswrapper[19803]: E0313 01:27:18.735770 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161d2fa6-a541-427a-a3e9-3297102a26f5" containerName="multus-admission-controller" Mar 13 01:27:18.736029 master-0 kubenswrapper[19803]: I0313 01:27:18.735785 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="161d2fa6-a541-427a-a3e9-3297102a26f5" 
containerName="multus-admission-controller" Mar 13 01:27:18.736029 master-0 kubenswrapper[19803]: E0313 01:27:18.735850 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd3a989f-6c19-4f5d-b14f-369ed9941051" containerName="installer" Mar 13 01:27:18.736029 master-0 kubenswrapper[19803]: I0313 01:27:18.735863 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd3a989f-6c19-4f5d-b14f-369ed9941051" containerName="installer" Mar 13 01:27:18.736029 master-0 kubenswrapper[19803]: E0313 01:27:18.735891 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6d93d3d-2899-4962-a25a-712e2fb9584b" containerName="installer" Mar 13 01:27:18.736029 master-0 kubenswrapper[19803]: I0313 01:27:18.735904 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6d93d3d-2899-4962-a25a-712e2fb9584b" containerName="installer" Mar 13 01:27:18.736029 master-0 kubenswrapper[19803]: E0313 01:27:18.735930 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="161d2fa6-a541-427a-a3e9-3297102a26f5" containerName="kube-rbac-proxy" Mar 13 01:27:18.736029 master-0 kubenswrapper[19803]: I0313 01:27:18.735942 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="161d2fa6-a541-427a-a3e9-3297102a26f5" containerName="kube-rbac-proxy" Mar 13 01:27:18.736488 master-0 kubenswrapper[19803]: I0313 01:27:18.736450 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="161d2fa6-a541-427a-a3e9-3297102a26f5" containerName="kube-rbac-proxy" Mar 13 01:27:18.736623 master-0 kubenswrapper[19803]: I0313 01:27:18.736494 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4626655d-add4-4cbd-9ba7-7082f63db442" containerName="kube-multus-additional-cni-plugins" Mar 13 01:27:18.736623 master-0 kubenswrapper[19803]: I0313 01:27:18.736566 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="161d2fa6-a541-427a-a3e9-3297102a26f5" containerName="multus-admission-controller" Mar 13 01:27:18.736623 
master-0 kubenswrapper[19803]: I0313 01:27:18.736606 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd3a989f-6c19-4f5d-b14f-369ed9941051" containerName="installer" Mar 13 01:27:18.736826 master-0 kubenswrapper[19803]: I0313 01:27:18.736629 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6d93d3d-2899-4962-a25a-712e2fb9584b" containerName="installer" Mar 13 01:27:18.737813 master-0 kubenswrapper[19803]: I0313 01:27:18.737765 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:18.743182 master-0 kubenswrapper[19803]: I0313 01:27:18.742570 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-9gbgx" Mar 13 01:27:18.743182 master-0 kubenswrapper[19803]: I0313 01:27:18.742919 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Mar 13 01:27:18.752088 master-0 kubenswrapper[19803]: I0313 01:27:18.751993 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 01:27:18.822543 master-0 kubenswrapper[19803]: I0313 01:27:18.821825 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:18.823171 master-0 kubenswrapper[19803]: I0313 01:27:18.823064 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4812756b-4eb5-45bf-beb3-f78be74eaec4-kube-api-access\") pod \"installer-3-master-0\" (UID: 
\"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:18.823634 master-0 kubenswrapper[19803]: I0313 01:27:18.823559 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-var-lock\") pod \"installer-3-master-0\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:18.927541 master-0 kubenswrapper[19803]: I0313 01:27:18.927455 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4812756b-4eb5-45bf-beb3-f78be74eaec4-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:18.927901 master-0 kubenswrapper[19803]: I0313 01:27:18.927728 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-var-lock\") pod \"installer-3-master-0\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:18.927901 master-0 kubenswrapper[19803]: I0313 01:27:18.927830 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:18.927901 master-0 kubenswrapper[19803]: I0313 01:27:18.927842 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-var-lock\") pod 
\"installer-3-master-0\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:18.927901 master-0 kubenswrapper[19803]: I0313 01:27:18.927897 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:18.947329 master-0 kubenswrapper[19803]: I0313 01:27:18.947268 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4812756b-4eb5-45bf-beb3-f78be74eaec4-kube-api-access\") pod \"installer-3-master-0\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:19.096154 master-0 kubenswrapper[19803]: I0313 01:27:19.095928 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:19.627689 master-0 kubenswrapper[19803]: I0313 01:27:19.627607 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 01:27:19.635989 master-0 kubenswrapper[19803]: W0313 01:27:19.635914 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4812756b_4eb5_45bf_beb3_f78be74eaec4.slice/crio-76ba75bd024a1f0ecddd642b86f8a1972aa33643735dc325cf7e846630fdb314 WatchSource:0}: Error finding container 76ba75bd024a1f0ecddd642b86f8a1972aa33643735dc325cf7e846630fdb314: Status 404 returned error can't find the container with id 76ba75bd024a1f0ecddd642b86f8a1972aa33643735dc325cf7e846630fdb314 Mar 13 01:27:19.666230 master-0 kubenswrapper[19803]: I0313 01:27:19.665998 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"4812756b-4eb5-45bf-beb3-f78be74eaec4","Type":"ContainerStarted","Data":"76ba75bd024a1f0ecddd642b86f8a1972aa33643735dc325cf7e846630fdb314"} Mar 13 01:27:20.028074 master-0 kubenswrapper[19803]: I0313 01:27:20.027992 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 13 01:27:20.029149 master-0 kubenswrapper[19803]: I0313 01:27:20.029117 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.032346 master-0 kubenswrapper[19803]: I0313 01:27:20.032290 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler"/"installer-sa-dockercfg-v4fzd" Mar 13 01:27:20.033155 master-0 kubenswrapper[19803]: I0313 01:27:20.033118 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler"/"kube-root-ca.crt" Mar 13 01:27:20.047247 master-0 kubenswrapper[19803]: I0313 01:27:20.047193 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.047347 master-0 kubenswrapper[19803]: I0313 01:27:20.047326 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.047587 master-0 kubenswrapper[19803]: I0313 01:27:20.047504 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.052459 master-0 kubenswrapper[19803]: I0313 01:27:20.052418 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 13 01:27:20.148605 master-0 
kubenswrapper[19803]: I0313 01:27:20.148568 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.148812 master-0 kubenswrapper[19803]: I0313 01:27:20.148794 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.148942 master-0 kubenswrapper[19803]: I0313 01:27:20.148928 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.149349 master-0 kubenswrapper[19803]: I0313 01:27:20.149333 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kubelet-dir\") pod \"installer-5-retry-1-master-0\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.149465 master-0 kubenswrapper[19803]: I0313 01:27:20.149452 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-var-lock\") pod \"installer-5-retry-1-master-0\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") " 
pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.169663 master-0 kubenswrapper[19803]: I0313 01:27:20.169561 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kube-api-access\") pod \"installer-5-retry-1-master-0\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") " pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.386992 master-0 kubenswrapper[19803]: I0313 01:27:20.386900 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" Mar 13 01:27:20.684318 master-0 kubenswrapper[19803]: I0313 01:27:20.684251 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"4812756b-4eb5-45bf-beb3-f78be74eaec4","Type":"ContainerStarted","Data":"f175308dc25e5a25aa95d958e990b4b066e7d36b4a48358eb4045f787948e696"} Mar 13 01:27:20.711613 master-0 kubenswrapper[19803]: I0313 01:27:20.711245 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-3-master-0" podStartSLOduration=2.711211583 podStartE2EDuration="2.711211583s" podCreationTimestamp="2026-03-13 01:27:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:27:20.708115217 +0000 UTC m=+588.673262926" watchObservedRunningTime="2026-03-13 01:27:20.711211583 +0000 UTC m=+588.676359302" Mar 13 01:27:20.927851 master-0 kubenswrapper[19803]: I0313 01:27:20.927786 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/installer-5-retry-1-master-0"] Mar 13 01:27:21.058537 master-0 kubenswrapper[19803]: I0313 01:27:21.057454 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 01:27:21.058537 master-0 kubenswrapper[19803]: I0313 01:27:21.057964 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="alertmanager" containerID="cri-o://87e7d839ee2a53e1c3f74a54b26e92cfb08db8934c88bf727c2b174e10eaeb14" gracePeriod=120 Mar 13 01:27:21.058537 master-0 kubenswrapper[19803]: I0313 01:27:21.058022 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy-metric" containerID="cri-o://c1f4c96a645b26f09b2c0582119a2127c438791eb500b75817b09119417c519f" gracePeriod=120 Mar 13 01:27:21.058537 master-0 kubenswrapper[19803]: I0313 01:27:21.058117 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy" containerID="cri-o://af3dea87089055ed8ff0a504beb660d463839e3f0a89b7384e4a83b81ca39cd2" gracePeriod=120 Mar 13 01:27:21.058537 master-0 kubenswrapper[19803]: I0313 01:27:21.058190 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy-web" containerID="cri-o://bf44bf0654c243447f5c2eddd5cb8108dd3746163d5c74fb0917f512b255102e" gracePeriod=120 Mar 13 01:27:21.058537 master-0 kubenswrapper[19803]: I0313 01:27:21.058247 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="config-reloader" containerID="cri-o://aae2a34209a7f70578604cbdaf885049b779a8cdbb0f4b62cc513666e9bd8b15" gracePeriod=120 Mar 13 01:27:21.058537 master-0 
kubenswrapper[19803]: I0313 01:27:21.058428 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/alertmanager-main-0" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="prom-label-proxy" containerID="cri-o://3a0472a659129f987f1d91c84295078b1dadc74543f77b94adee51424a3773b8" gracePeriod=120 Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718179 19803 generic.go:334] "Generic (PLEG): container finished" podID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerID="3a0472a659129f987f1d91c84295078b1dadc74543f77b94adee51424a3773b8" exitCode=0 Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718222 19803 generic.go:334] "Generic (PLEG): container finished" podID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerID="c1f4c96a645b26f09b2c0582119a2127c438791eb500b75817b09119417c519f" exitCode=0 Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718231 19803 generic.go:334] "Generic (PLEG): container finished" podID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerID="af3dea87089055ed8ff0a504beb660d463839e3f0a89b7384e4a83b81ca39cd2" exitCode=0 Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718239 19803 generic.go:334] "Generic (PLEG): container finished" podID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerID="bf44bf0654c243447f5c2eddd5cb8108dd3746163d5c74fb0917f512b255102e" exitCode=0 Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718250 19803 generic.go:334] "Generic (PLEG): container finished" podID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerID="aae2a34209a7f70578604cbdaf885049b779a8cdbb0f4b62cc513666e9bd8b15" exitCode=0 Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718258 19803 generic.go:334] "Generic (PLEG): container finished" podID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerID="87e7d839ee2a53e1c3f74a54b26e92cfb08db8934c88bf727c2b174e10eaeb14" exitCode=0 Mar 13 01:27:21.723554 
master-0 kubenswrapper[19803]: I0313 01:27:21.718305 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerDied","Data":"3a0472a659129f987f1d91c84295078b1dadc74543f77b94adee51424a3773b8"} Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718334 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerDied","Data":"c1f4c96a645b26f09b2c0582119a2127c438791eb500b75817b09119417c519f"} Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718346 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerDied","Data":"af3dea87089055ed8ff0a504beb660d463839e3f0a89b7384e4a83b81ca39cd2"} Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718355 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerDied","Data":"bf44bf0654c243447f5c2eddd5cb8108dd3746163d5c74fb0917f512b255102e"} Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718364 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerDied","Data":"aae2a34209a7f70578604cbdaf885049b779a8cdbb0f4b62cc513666e9bd8b15"} Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718374 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerDied","Data":"87e7d839ee2a53e1c3f74a54b26e92cfb08db8934c88bf727c2b174e10eaeb14"} Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: 
I0313 01:27:21.718383 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"8b300a46-0e04-4109-a370-2589ce3efa0c","Type":"ContainerDied","Data":"bf411ce7f4ec34805ce73543df45f7c685ae3232f316f9344feaa3b9efe22bb7"} Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.718392 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bf411ce7f4ec34805ce73543df45f7c685ae3232f316f9344feaa3b9efe22bb7" Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.721767 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"ad71e4d6-32df-4ac5-acd2-e402cfef4611","Type":"ContainerStarted","Data":"0ce59926c04cbb5bf5147c89edac2d32d8a0313612394a159a81b854d56aecd7"} Mar 13 01:27:21.723554 master-0 kubenswrapper[19803]: I0313 01:27:21.721785 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"ad71e4d6-32df-4ac5-acd2-e402cfef4611","Type":"ContainerStarted","Data":"5aea1e02616917b8e583724b63836d5aed3ef0111dbb2d415bd3da0a6185b260"} Mar 13 01:27:21.754357 master-0 kubenswrapper[19803]: I0313 01:27:21.754270 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:27:21.761595 master-0 kubenswrapper[19803]: I0313 01:27:21.761478 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" podStartSLOduration=1.761445691 podStartE2EDuration="1.761445691s" podCreationTimestamp="2026-03-13 01:27:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:27:21.76020043 +0000 UTC m=+589.725348139" watchObservedRunningTime="2026-03-13 01:27:21.761445691 +0000 UTC m=+589.726593410" Mar 13 01:27:21.793041 master-0 kubenswrapper[19803]: I0313 01:27:21.792975 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-tls-assets\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793041 master-0 kubenswrapper[19803]: I0313 01:27:21.793025 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-main-db\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793282 master-0 kubenswrapper[19803]: I0313 01:27:21.793074 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-main-tls\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793282 master-0 kubenswrapper[19803]: I0313 01:27:21.793121 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-web\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793282 master-0 kubenswrapper[19803]: I0313 01:27:21.793178 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-metrics-client-ca\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793282 master-0 kubenswrapper[19803]: I0313 01:27:21.793226 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-metric\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793282 master-0 kubenswrapper[19803]: I0313 01:27:21.793254 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2zmgm\" (UniqueName: \"kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-kube-api-access-2zmgm\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793500 master-0 kubenswrapper[19803]: I0313 01:27:21.793294 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793500 master-0 kubenswrapper[19803]: I0313 01:27:21.793376 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-volume\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-config-volume\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793500 master-0 kubenswrapper[19803]: I0313 01:27:21.793398 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-config-out\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793500 master-0 kubenswrapper[19803]: I0313 01:27:21.793417 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.793500 master-0 kubenswrapper[19803]: I0313 01:27:21.793471 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-web-config\") pod \"8b300a46-0e04-4109-a370-2589ce3efa0c\" (UID: \"8b300a46-0e04-4109-a370-2589ce3efa0c\") " Mar 13 01:27:21.795116 master-0 kubenswrapper[19803]: I0313 01:27:21.793903 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-main-db" (OuterVolumeSpecName: "alertmanager-main-db") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "alertmanager-main-db". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:27:21.795116 master-0 kubenswrapper[19803]: I0313 01:27:21.794157 19803 reconciler_common.go:293] "Volume detached for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-main-db\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.795249 master-0 kubenswrapper[19803]: I0313 01:27:21.795182 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-metrics-client-ca" (OuterVolumeSpecName: "metrics-client-ca") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:27:21.796844 master-0 kubenswrapper[19803]: I0313 01:27:21.796762 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle" (OuterVolumeSpecName: "alertmanager-trusted-ca-bundle") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "alertmanager-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:27:21.798354 master-0 kubenswrapper[19803]: I0313 01:27:21.798241 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-main-tls" (OuterVolumeSpecName: "secret-alertmanager-main-tls") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "secret-alertmanager-main-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:27:21.799113 master-0 kubenswrapper[19803]: I0313 01:27:21.799029 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:27:21.809202 master-0 kubenswrapper[19803]: I0313 01:27:21.807913 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-config-out" (OuterVolumeSpecName: "config-out") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:27:21.812528 master-0 kubenswrapper[19803]: I0313 01:27:21.811831 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-web") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:27:21.812528 master-0 kubenswrapper[19803]: I0313 01:27:21.812342 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-metric" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy-metric") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy-metric". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:27:21.823296 master-0 kubenswrapper[19803]: I0313 01:27:21.820011 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-kube-api-access-2zmgm" (OuterVolumeSpecName: "kube-api-access-2zmgm") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "kube-api-access-2zmgm". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:27:21.823296 master-0 kubenswrapper[19803]: I0313 01:27:21.823164 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy" (OuterVolumeSpecName: "secret-alertmanager-kube-rbac-proxy") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "secret-alertmanager-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:27:21.826926 master-0 kubenswrapper[19803]: I0313 01:27:21.826864 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-config-volume" (OuterVolumeSpecName: "config-volume") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:27:21.888821 master-0 kubenswrapper[19803]: I0313 01:27:21.888761 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-web-config" (OuterVolumeSpecName: "web-config") pod "8b300a46-0e04-4109-a370-2589ce3efa0c" (UID: "8b300a46-0e04-4109-a370-2589ce3efa0c"). InnerVolumeSpecName "web-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:27:21.895789 master-0 kubenswrapper[19803]: I0313 01:27:21.895721 19803 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-web-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.895869 master-0 kubenswrapper[19803]: I0313 01:27:21.895788 19803 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-tls-assets\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.895869 master-0 kubenswrapper[19803]: I0313 01:27:21.895814 19803 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-main-tls\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.895869 master-0 kubenswrapper[19803]: I0313 01:27:21.895842 19803 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.895965 master-0 kubenswrapper[19803]: I0313 01:27:21.895868 19803 reconciler_common.go:293] "Volume detached for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-metrics-client-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.895965 master-0 kubenswrapper[19803]: I0313 01:27:21.895893 19803 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy-metric\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.895965 master-0 kubenswrapper[19803]: I0313 01:27:21.895914 19803 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-2zmgm\" (UniqueName: \"kubernetes.io/projected/8b300a46-0e04-4109-a370-2589ce3efa0c-kube-api-access-2zmgm\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.895965 master-0 kubenswrapper[19803]: I0313 01:27:21.895937 19803 reconciler_common.go:293] "Volume detached for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-secret-alertmanager-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.895965 master-0 kubenswrapper[19803]: I0313 01:27:21.895956 19803 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/8b300a46-0e04-4109-a370-2589ce3efa0c-config-volume\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.896113 master-0 kubenswrapper[19803]: I0313 01:27:21.895975 19803 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/8b300a46-0e04-4109-a370-2589ce3efa0c-config-out\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:21.896113 master-0 kubenswrapper[19803]: I0313 01:27:21.895994 19803 reconciler_common.go:293] "Volume detached for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8b300a46-0e04-4109-a370-2589ce3efa0c-alertmanager-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:22.729589 master-0 kubenswrapper[19803]: I0313 01:27:22.729483 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:27:22.769907 master-0 kubenswrapper[19803]: I0313 01:27:22.769799 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 01:27:22.776258 master-0 kubenswrapper[19803]: I0313 01:27:22.776189 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 01:27:24.342314 master-0 kubenswrapper[19803]: I0313 01:27:24.342166 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" path="/var/lib/kubelet/pods/8b300a46-0e04-4109-a370-2589ce3efa0c/volumes" Mar 13 01:27:25.573308 master-0 kubenswrapper[19803]: I0313 01:27:25.573084 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 01:27:25.574426 master-0 kubenswrapper[19803]: I0313 01:27:25.573823 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="prometheus" containerID="cri-o://ec7d02065e35d431513db822623266aaf8c95259a309a1005f171fe05f0e637e" gracePeriod=600 Mar 13 01:27:25.574426 master-0 kubenswrapper[19803]: I0313 01:27:25.573888 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy" containerID="cri-o://85197c0ec36988e7a55e25e57f92181f596578fbe76bd9378133dae8329fd79d" gracePeriod=600 Mar 13 01:27:25.574426 master-0 kubenswrapper[19803]: I0313 01:27:25.574006 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy-web" containerID="cri-o://6d6124c93746c8e1229ad1de12d71fb75ec7fe2a0b05142a6d2ec0842ec2a4e8" 
gracePeriod=600 Mar 13 01:27:25.574426 master-0 kubenswrapper[19803]: I0313 01:27:25.574049 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="config-reloader" containerID="cri-o://2c37e2d8ff924d4aa20df6157bfd1b832becd3b10c0519ea45391c27ebf82fc4" gracePeriod=600 Mar 13 01:27:25.574426 master-0 kubenswrapper[19803]: I0313 01:27:25.574007 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="thanos-sidecar" containerID="cri-o://0026657e78430f9b6b6d2a35bd7651b500cfaf15f0b053eff7c7f69c0cbf7516" gracePeriod=600 Mar 13 01:27:25.574426 master-0 kubenswrapper[19803]: I0313 01:27:25.574049 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-monitoring/prometheus-k8s-0" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy-thanos" containerID="cri-o://0a088a6c8e4a4ccc35a614fd9f7ebc52c2972439147464ce8f71cfb707b2d4df" gracePeriod=600 Mar 13 01:27:25.774666 master-0 kubenswrapper[19803]: I0313 01:27:25.774576 19803 generic.go:334] "Generic (PLEG): container finished" podID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerID="0a088a6c8e4a4ccc35a614fd9f7ebc52c2972439147464ce8f71cfb707b2d4df" exitCode=0 Mar 13 01:27:25.774666 master-0 kubenswrapper[19803]: I0313 01:27:25.774644 19803 generic.go:334] "Generic (PLEG): container finished" podID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerID="85197c0ec36988e7a55e25e57f92181f596578fbe76bd9378133dae8329fd79d" exitCode=0 Mar 13 01:27:25.774666 master-0 kubenswrapper[19803]: I0313 01:27:25.774654 19803 generic.go:334] "Generic (PLEG): container finished" podID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerID="6d6124c93746c8e1229ad1de12d71fb75ec7fe2a0b05142a6d2ec0842ec2a4e8" exitCode=0 Mar 13 01:27:25.774666 master-0 
kubenswrapper[19803]: I0313 01:27:25.774663 19803 generic.go:334] "Generic (PLEG): container finished" podID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerID="0026657e78430f9b6b6d2a35bd7651b500cfaf15f0b053eff7c7f69c0cbf7516" exitCode=0 Mar 13 01:27:25.774666 master-0 kubenswrapper[19803]: I0313 01:27:25.774670 19803 generic.go:334] "Generic (PLEG): container finished" podID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerID="2c37e2d8ff924d4aa20df6157bfd1b832becd3b10c0519ea45391c27ebf82fc4" exitCode=0 Mar 13 01:27:25.774666 master-0 kubenswrapper[19803]: I0313 01:27:25.774677 19803 generic.go:334] "Generic (PLEG): container finished" podID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerID="ec7d02065e35d431513db822623266aaf8c95259a309a1005f171fe05f0e637e" exitCode=0 Mar 13 01:27:25.776220 master-0 kubenswrapper[19803]: I0313 01:27:25.774744 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerDied","Data":"0a088a6c8e4a4ccc35a614fd9f7ebc52c2972439147464ce8f71cfb707b2d4df"} Mar 13 01:27:25.776220 master-0 kubenswrapper[19803]: I0313 01:27:25.774804 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerDied","Data":"85197c0ec36988e7a55e25e57f92181f596578fbe76bd9378133dae8329fd79d"} Mar 13 01:27:25.776220 master-0 kubenswrapper[19803]: I0313 01:27:25.774816 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerDied","Data":"6d6124c93746c8e1229ad1de12d71fb75ec7fe2a0b05142a6d2ec0842ec2a4e8"} Mar 13 01:27:25.776220 master-0 kubenswrapper[19803]: I0313 01:27:25.774826 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerDied","Data":"0026657e78430f9b6b6d2a35bd7651b500cfaf15f0b053eff7c7f69c0cbf7516"}
Mar 13 01:27:25.776220 master-0 kubenswrapper[19803]: I0313 01:27:25.774835 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerDied","Data":"2c37e2d8ff924d4aa20df6157bfd1b832becd3b10c0519ea45391c27ebf82fc4"}
Mar 13 01:27:25.776220 master-0 kubenswrapper[19803]: I0313 01:27:25.774844 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerDied","Data":"ec7d02065e35d431513db822623266aaf8c95259a309a1005f171fe05f0e637e"}
Mar 13 01:27:25.776821 master-0 kubenswrapper[19803]: I0313 01:27:25.776796 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/3.log"
Mar 13 01:27:25.778358 master-0 kubenswrapper[19803]: I0313 01:27:25.778299 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager-cert-syncer/0.log"
Mar 13 01:27:25.779106 master-0 kubenswrapper[19803]: I0313 01:27:25.779073 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log"
Mar 13 01:27:25.779184 master-0 kubenswrapper[19803]: I0313 01:27:25.779109 19803 generic.go:334] "Generic (PLEG): container finished" podID="24e04786030519cf5fd9f600ea6710e9" containerID="1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b" exitCode=1
Mar 13 01:27:25.779658 master-0 kubenswrapper[19803]: I0313 01:27:25.779182 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerDied","Data":"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b"}
Mar 13 01:27:25.780146 master-0 kubenswrapper[19803]: I0313 01:27:25.780112 19803 scope.go:117] "RemoveContainer" containerID="1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b"
Mar 13 01:27:25.784679 master-0 kubenswrapper[19803]: I0313 01:27:25.784633 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/0.log"
Mar 13 01:27:25.785222 master-0 kubenswrapper[19803]: I0313 01:27:25.785193 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log"
Mar 13 01:27:25.785647 master-0 kubenswrapper[19803]: I0313 01:27:25.785605 19803 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="27da9b144d4a4f750c33de749ba64c7d7c2d328ab7a8dc23bb642f52fbaf1fd7" exitCode=1
Mar 13 01:27:25.785647 master-0 kubenswrapper[19803]: I0313 01:27:25.785644 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerDied","Data":"27da9b144d4a4f750c33de749ba64c7d7c2d328ab7a8dc23bb642f52fbaf1fd7"}
Mar 13 01:27:25.786117 master-0 kubenswrapper[19803]: I0313 01:27:25.786079 19803 scope.go:117] "RemoveContainer" containerID="27da9b144d4a4f750c33de749ba64c7d7c2d328ab7a8dc23bb642f52fbaf1fd7"
Mar 13 01:27:26.170631 master-0 kubenswrapper[19803]: I0313 01:27:26.170570 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:27:26.311492 master-0 kubenswrapper[19803]: I0313 01:27:26.311429 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-kubelet-serving-ca-bundle\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.311492 master-0 kubenswrapper[19803]: I0313 01:27:26.311491 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-serving-certs-ca-bundle\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.311829 master-0 kubenswrapper[19803]: I0313 01:27:26.311549 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-web-config\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.311829 master-0 kubenswrapper[19803]: I0313 01:27:26.311579 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config-out\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.311829 master-0 kubenswrapper[19803]: I0313 01:27:26.311613 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.311829 master-0 kubenswrapper[19803]: I0313 01:27:26.311663 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-thanos-prometheus-http-client-file\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.311829 master-0 kubenswrapper[19803]: I0313 01:27:26.311693 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-db\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.311829 master-0 kubenswrapper[19803]: I0313 01:27:26.311711 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gws49\" (UniqueName: \"kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-kube-api-access-gws49\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.311829 master-0 kubenswrapper[19803]: I0313 01:27:26.311766 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-grpc-tls\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.311829 master-0 kubenswrapper[19803]: I0313 01:27:26.311784 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-tls\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.311829 master-0 kubenswrapper[19803]: I0313 01:27:26.311816 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-metrics-client-ca\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.312184 master-0 kubenswrapper[19803]: I0313 01:27:26.311845 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-tls-assets\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.312184 master-0 kubenswrapper[19803]: I0313 01:27:26.311868 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-metrics-client-certs\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.312184 master-0 kubenswrapper[19803]: I0313 01:27:26.311890 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-rulefiles-0\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.312184 master-0 kubenswrapper[19803]: I0313 01:27:26.311926 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.312184 master-0 kubenswrapper[19803]: I0313 01:27:26.311951 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.312184 master-0 kubenswrapper[19803]: I0313 01:27:26.311979 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-kube-rbac-proxy\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.312184 master-0 kubenswrapper[19803]: I0313 01:27:26.312008 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") pod \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\" (UID: \"80dda8c5-33c6-46ba-b4fa-8e4877de9187\") "
Mar 13 01:27:26.312806 master-0 kubenswrapper[19803]: I0313 01:27:26.312768 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle" (OuterVolumeSpecName: "prometheus-trusted-ca-bundle") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "prometheus-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:27:26.313937 master-0 kubenswrapper[19803]: I0313 01:27:26.313172 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-metrics-client-ca" (OuterVolumeSpecName: "configmap-metrics-client-ca") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "configmap-metrics-client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:27:26.314017 master-0 kubenswrapper[19803]: I0313 01:27:26.313953 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-serving-certs-ca-bundle" (OuterVolumeSpecName: "configmap-serving-certs-ca-bundle") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "configmap-serving-certs-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:27:26.315668 master-0 kubenswrapper[19803]: I0313 01:27:26.314493 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-kubelet-serving-ca-bundle" (OuterVolumeSpecName: "configmap-kubelet-serving-ca-bundle") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "configmap-kubelet-serving-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:27:26.316267 master-0 kubenswrapper[19803]: I0313 01:27:26.316227 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-kube-rbac-proxy-web" (OuterVolumeSpecName: "secret-prometheus-k8s-kube-rbac-proxy-web") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "secret-prometheus-k8s-kube-rbac-proxy-web". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:26.316854 master-0 kubenswrapper[19803]: I0313 01:27:26.316814 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-db" (OuterVolumeSpecName: "prometheus-k8s-db") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "prometheus-k8s-db". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 01:27:26.318067 master-0 kubenswrapper[19803]: I0313 01:27:26.317963 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-tls-assets" (OuterVolumeSpecName: "tls-assets") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "tls-assets". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:27:26.319919 master-0 kubenswrapper[19803]: I0313 01:27:26.319743 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-rulefiles-0" (OuterVolumeSpecName: "prometheus-k8s-rulefiles-0") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "prometheus-k8s-rulefiles-0". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:27:26.320745 master-0 kubenswrapper[19803]: I0313 01:27:26.320447 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-grpc-tls" (OuterVolumeSpecName: "secret-grpc-tls") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "secret-grpc-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:26.320745 master-0 kubenswrapper[19803]: I0313 01:27:26.320690 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-kube-rbac-proxy" (OuterVolumeSpecName: "secret-kube-rbac-proxy") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "secret-kube-rbac-proxy". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:26.324977 master-0 kubenswrapper[19803]: I0313 01:27:26.324928 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config-out" (OuterVolumeSpecName: "config-out") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "config-out". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Mar 13 01:27:26.326413 master-0 kubenswrapper[19803]: I0313 01:27:26.326372 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config" (OuterVolumeSpecName: "config") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:26.326508 master-0 kubenswrapper[19803]: I0313 01:27:26.326430 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-thanos-sidecar-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-thanos-sidecar-tls") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "secret-prometheus-k8s-thanos-sidecar-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:26.327338 master-0 kubenswrapper[19803]: I0313 01:27:26.327022 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-tls" (OuterVolumeSpecName: "secret-prometheus-k8s-tls") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "secret-prometheus-k8s-tls". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:26.327338 master-0 kubenswrapper[19803]: I0313 01:27:26.327031 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-kube-api-access-gws49" (OuterVolumeSpecName: "kube-api-access-gws49") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "kube-api-access-gws49". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:27:26.327681 master-0 kubenswrapper[19803]: I0313 01:27:26.327630 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-thanos-prometheus-http-client-file" (OuterVolumeSpecName: "thanos-prometheus-http-client-file") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "thanos-prometheus-http-client-file". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:26.327755 master-0 kubenswrapper[19803]: I0313 01:27:26.327675 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-metrics-client-certs" (OuterVolumeSpecName: "secret-metrics-client-certs") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "secret-metrics-client-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:26.385868 master-0 kubenswrapper[19803]: I0313 01:27:26.385796 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-web-config" (OuterVolumeSpecName: "web-config") pod "80dda8c5-33c6-46ba-b4fa-8e4877de9187" (UID: "80dda8c5-33c6-46ba-b4fa-8e4877de9187"). InnerVolumeSpecName "web-config". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414021 19803 reconciler_common.go:293] "Volume detached for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-kubelet-serving-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414074 19803 reconciler_common.go:293] "Volume detached for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-serving-certs-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414087 19803 reconciler_common.go:293] "Volume detached for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-web-config\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414097 19803 reconciler_common.go:293] "Volume detached for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config-out\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414108 19803 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-thanos-sidecar-tls\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414119 19803 reconciler_common.go:293] "Volume detached for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-thanos-prometheus-http-client-file\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414132 19803 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-db\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414145 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gws49\" (UniqueName: \"kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-kube-api-access-gws49\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414153 19803 reconciler_common.go:293] "Volume detached for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-grpc-tls\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414161 19803 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-tls\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414171 19803 reconciler_common.go:293] "Volume detached for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-configmap-metrics-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414180 19803 reconciler_common.go:293] "Volume detached for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/80dda8c5-33c6-46ba-b4fa-8e4877de9187-tls-assets\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414189 19803 reconciler_common.go:293] "Volume detached for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-metrics-client-certs\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414198 19803 reconciler_common.go:293] "Volume detached for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-k8s-rulefiles-0\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414207 19803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-config\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414219 19803 reconciler_common.go:293] "Volume detached for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-prometheus-k8s-kube-rbac-proxy-web\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414229 19803 reconciler_common.go:293] "Volume detached for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/80dda8c5-33c6-46ba-b4fa-8e4877de9187-secret-kube-rbac-proxy\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.414428 master-0 kubenswrapper[19803]: I0313 01:27:26.414238 19803 reconciler_common.go:293] "Volume detached for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/80dda8c5-33c6-46ba-b4fa-8e4877de9187-prometheus-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:26.799979 master-0 kubenswrapper[19803]: I0313 01:27:26.799896 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/3.log"
Mar 13 01:27:26.801899 master-0 kubenswrapper[19803]: I0313 01:27:26.801832 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager-cert-syncer/0.log"
Mar 13 01:27:26.802925 master-0 kubenswrapper[19803]: I0313 01:27:26.802861 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log"
Mar 13 01:27:26.803185 master-0 kubenswrapper[19803]: I0313 01:27:26.803045 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"24e04786030519cf5fd9f600ea6710e9","Type":"ContainerStarted","Data":"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4"}
Mar 13 01:27:26.807489 master-0 kubenswrapper[19803]: I0313 01:27:26.807452 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/0.log"
Mar 13 01:27:26.808199 master-0 kubenswrapper[19803]: I0313 01:27:26.808161 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log"
Mar 13 01:27:26.809140 master-0 kubenswrapper[19803]: I0313 01:27:26.809090 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1d3d45b6ce1b3764f9927e623a71adf8","Type":"ContainerStarted","Data":"b695d42371df758d1a7c1ba4450073ea3c8b6d48c4320403e34e1092182489bd"}
Mar 13 01:27:26.815404 master-0 kubenswrapper[19803]: I0313 01:27:26.815356 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"80dda8c5-33c6-46ba-b4fa-8e4877de9187","Type":"ContainerDied","Data":"00f8919fb307757ca915c63b29d25cbc015314bd7fd310f0d1b2c388c59e3462"}
Mar 13 01:27:26.815574 master-0 kubenswrapper[19803]: I0313 01:27:26.815430 19803 scope.go:117] "RemoveContainer" containerID="0a088a6c8e4a4ccc35a614fd9f7ebc52c2972439147464ce8f71cfb707b2d4df"
Mar 13 01:27:26.815693 master-0 kubenswrapper[19803]: I0313 01:27:26.815664 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:27:26.857709 master-0 kubenswrapper[19803]: I0313 01:27:26.857521 19803 scope.go:117] "RemoveContainer" containerID="85197c0ec36988e7a55e25e57f92181f596578fbe76bd9378133dae8329fd79d"
Mar 13 01:27:26.908563 master-0 kubenswrapper[19803]: I0313 01:27:26.908408 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 13 01:27:26.914608 master-0 kubenswrapper[19803]: I0313 01:27:26.914190 19803 scope.go:117] "RemoveContainer" containerID="6d6124c93746c8e1229ad1de12d71fb75ec7fe2a0b05142a6d2ec0842ec2a4e8"
Mar 13 01:27:26.922424 master-0 kubenswrapper[19803]: I0313 01:27:26.922372 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"]
Mar 13 01:27:26.936390 master-0 kubenswrapper[19803]: I0313 01:27:26.936362 19803 scope.go:117] "RemoveContainer" containerID="0026657e78430f9b6b6d2a35bd7651b500cfaf15f0b053eff7c7f69c0cbf7516"
Mar 13 01:27:26.956309 master-0 kubenswrapper[19803]: I0313 01:27:26.956265 19803 scope.go:117] "RemoveContainer" containerID="2c37e2d8ff924d4aa20df6157bfd1b832becd3b10c0519ea45391c27ebf82fc4"
Mar 13 01:27:26.978052 master-0 kubenswrapper[19803]: I0313 01:27:26.977988 19803 scope.go:117] "RemoveContainer" containerID="ec7d02065e35d431513db822623266aaf8c95259a309a1005f171fe05f0e637e"
Mar 13 01:27:26.998140 master-0 kubenswrapper[19803]: I0313 01:27:26.998061 19803 scope.go:117] "RemoveContainer" containerID="e46867efd75c4c36750d2b1cae22396c86c896f1e63811cbdf54fe789741c60d"
Mar 13 01:27:28.315713 master-0 kubenswrapper[19803]: I0313 01:27:28.315641 19803 scope.go:117] "RemoveContainer" containerID="eb163595f657d3d465404b9e814f2c9a64ffb4106e2b87f575cc373770465bcf"
Mar 13 01:27:28.342535 master-0 kubenswrapper[19803]: I0313 01:27:28.338622 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" path="/var/lib/kubelet/pods/80dda8c5-33c6-46ba-b4fa-8e4877de9187/volumes"
Mar 13 01:27:28.861280 master-0 kubenswrapper[19803]: I0313 01:27:28.861212 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/3.log"
Mar 13 01:27:28.862359 master-0 kubenswrapper[19803]: I0313 01:27:28.862295 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" event={"ID":"6fd82994-f4d4-49e9-8742-07e206322e76","Type":"ContainerStarted","Data":"29a445978f48e5077ef7e977b1e34520f41e8e08e690a7af2bb18dba964c9e0e"}
Mar 13 01:27:28.862770 master-0 kubenswrapper[19803]: I0313 01:27:28.862725 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r"
Mar 13 01:27:29.318009 master-0 kubenswrapper[19803]: I0313 01:27:29.317942 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"]
Mar 13 01:27:29.318577 master-0 kubenswrapper[19803]: I0313 01:27:29.318236 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/installer-3-master-0" podUID="4812756b-4eb5-45bf-beb3-f78be74eaec4" containerName="installer" containerID="cri-o://f175308dc25e5a25aa95d958e990b4b066e7d36b4a48358eb4045f787948e696" gracePeriod=30
Mar 13 01:27:31.326015 master-0 kubenswrapper[19803]: I0313 01:27:31.325919 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"]
Mar 13 01:27:31.326819 master-0 kubenswrapper[19803]: E0313 01:27:31.326771 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="prometheus"
Mar 13 01:27:31.326819 master-0 kubenswrapper[19803]: I0313 01:27:31.326809 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="prometheus"
Mar 13 01:27:31.326998 master-0 kubenswrapper[19803]: E0313 01:27:31.326890 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy"
Mar 13 01:27:31.327062 master-0 kubenswrapper[19803]: I0313 01:27:31.327005 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy"
Mar 13 01:27:31.327112 master-0 kubenswrapper[19803]: E0313 01:27:31.327057 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="init-config-reloader"
Mar 13 01:27:31.327112 master-0 kubenswrapper[19803]: I0313 01:27:31.327073 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="init-config-reloader"
Mar 13 01:27:31.327112 master-0 kubenswrapper[19803]: E0313 01:27:31.327103 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy-web"
Mar 13 01:27:31.327238 master-0 kubenswrapper[19803]: I0313 01:27:31.327121 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy-web"
Mar 13 01:27:31.327238 master-0 kubenswrapper[19803]: E0313 01:27:31.327153 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy-thanos"
Mar 13 01:27:31.327238 master-0 kubenswrapper[19803]: I0313 01:27:31.327168 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy-thanos"
Mar 13 01:27:31.327238 master-0 kubenswrapper[19803]: E0313 01:27:31.327189 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="config-reloader"
Mar 13 01:27:31.327238 master-0 kubenswrapper[19803]: I0313 01:27:31.327205 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="config-reloader"
Mar 13 01:27:31.327238 master-0 kubenswrapper[19803]: E0313 01:27:31.327232 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="prom-label-proxy"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: I0313 01:27:31.327249 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="prom-label-proxy"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: E0313 01:27:31.327278 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="config-reloader"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: I0313 01:27:31.327295 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="config-reloader"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: E0313 01:27:31.327313 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: I0313 01:27:31.327328 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: E0313 01:27:31.327354 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy-web"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: I0313 01:27:31.327370 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy-web"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: E0313 01:27:31.327396 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="alertmanager"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: I0313 01:27:31.327414 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="alertmanager"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: E0313 01:27:31.327439 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="init-config-reloader"
Mar 13 01:27:31.327461 master-0 kubenswrapper[19803]: I0313 01:27:31.327456 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="init-config-reloader"
Mar 13 01:27:31.327901 master-0 kubenswrapper[19803]: E0313 01:27:31.327505 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="thanos-sidecar"
Mar 13 01:27:31.327901 master-0 kubenswrapper[19803]: I0313 01:27:31.327556 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="thanos-sidecar"
Mar 13 01:27:31.327901 master-0 kubenswrapper[19803]: E0313 01:27:31.327579 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy-metric"
Mar 13 01:27:31.327901 master-0 kubenswrapper[19803]: I0313 01:27:31.327597 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy-metric"
Mar 13 01:27:31.328042 master-0 kubenswrapper[19803]: I0313 01:27:31.327990 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="thanos-sidecar"
Mar 13 01:27:31.328087 master-0 kubenswrapper[19803]: I0313 01:27:31.328059 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy-web"
Mar 13 01:27:31.328130 master-0 kubenswrapper[19803]: I0313 01:27:31.328088 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="prom-label-proxy"
Mar 13 01:27:31.328130 master-0 kubenswrapper[19803]: I0313 01:27:31.328106 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="alertmanager"
Mar 13 01:27:31.328206 master-0 kubenswrapper[19803]: I0313 01:27:31.328128 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy-metric"
Mar 13 01:27:31.328206 master-0 kubenswrapper[19803]: I0313 01:27:31.328149 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="config-reloader"
Mar 13 01:27:31.328206 master-0 kubenswrapper[19803]: I0313 01:27:31.328172 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy"
Mar 13 01:27:31.328313 master-0 kubenswrapper[19803]: I0313 01:27:31.328203 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="prometheus"
Mar 13 01:27:31.328313 master-0 kubenswrapper[19803]: I0313 01:27:31.328238 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="config-reloader"
Mar
13 01:27:31.328313 master-0 kubenswrapper[19803]: I0313 01:27:31.328262 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy-web" Mar 13 01:27:31.328313 master-0 kubenswrapper[19803]: I0313 01:27:31.328287 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b300a46-0e04-4109-a370-2589ce3efa0c" containerName="kube-rbac-proxy" Mar 13 01:27:31.328473 master-0 kubenswrapper[19803]: I0313 01:27:31.328316 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="80dda8c5-33c6-46ba-b4fa-8e4877de9187" containerName="kube-rbac-proxy-thanos" Mar 13 01:27:31.329233 master-0 kubenswrapper[19803]: I0313 01:27:31.329194 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:31.346862 master-0 kubenswrapper[19803]: I0313 01:27:31.346802 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 13 01:27:31.411591 master-0 kubenswrapper[19803]: I0313 01:27:31.411525 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-var-lock\") pod \"installer-4-master-0\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:31.412050 master-0 kubenswrapper[19803]: I0313 01:27:31.412028 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/943a993e-2a88-4bda-832f-d03e9d2d08d8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:31.412181 master-0 kubenswrapper[19803]: I0313 01:27:31.412162 19803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:31.514009 master-0 kubenswrapper[19803]: I0313 01:27:31.513947 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-var-lock\") pod \"installer-4-master-0\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:31.514359 master-0 kubenswrapper[19803]: I0313 01:27:31.514341 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/943a993e-2a88-4bda-832f-d03e9d2d08d8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:31.514468 master-0 kubenswrapper[19803]: I0313 01:27:31.514156 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-var-lock\") pod \"installer-4-master-0\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:31.514468 master-0 kubenswrapper[19803]: I0313 01:27:31.514434 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:31.514651 master-0 
kubenswrapper[19803]: I0313 01:27:31.514632 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-kubelet-dir\") pod \"installer-4-master-0\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:31.543454 master-0 kubenswrapper[19803]: I0313 01:27:31.543381 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/943a993e-2a88-4bda-832f-d03e9d2d08d8-kube-api-access\") pod \"installer-4-master-0\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:31.676806 master-0 kubenswrapper[19803]: I0313 01:27:31.676648 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:27:32.233726 master-0 kubenswrapper[19803]: I0313 01:27:32.233625 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/installer-4-master-0"] Mar 13 01:27:32.703868 master-0 kubenswrapper[19803]: I0313 01:27:32.703769 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-64488f9d78-trr9r" Mar 13 01:27:32.904200 master-0 kubenswrapper[19803]: I0313 01:27:32.904095 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"943a993e-2a88-4bda-832f-d03e9d2d08d8","Type":"ContainerStarted","Data":"0c57179f9f1188dc628485baf6939e006097c3e65ac118069477873d8409a413"} Mar 13 01:27:32.904566 master-0 kubenswrapper[19803]: I0313 01:27:32.904222 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" 
event={"ID":"943a993e-2a88-4bda-832f-d03e9d2d08d8","Type":"ContainerStarted","Data":"5847f166089a0b85efb90458337282fc04c8e6c7930d55283bcc324d11078c37"} Mar 13 01:27:51.114448 master-0 kubenswrapper[19803]: I0313 01:27:51.114249 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_4812756b-4eb5-45bf-beb3-f78be74eaec4/installer/0.log" Mar 13 01:27:51.115735 master-0 kubenswrapper[19803]: I0313 01:27:51.114463 19803 generic.go:334] "Generic (PLEG): container finished" podID="4812756b-4eb5-45bf-beb3-f78be74eaec4" containerID="f175308dc25e5a25aa95d958e990b4b066e7d36b4a48358eb4045f787948e696" exitCode=1 Mar 13 01:27:51.115735 master-0 kubenswrapper[19803]: I0313 01:27:51.114578 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"4812756b-4eb5-45bf-beb3-f78be74eaec4","Type":"ContainerDied","Data":"f175308dc25e5a25aa95d958e990b4b066e7d36b4a48358eb4045f787948e696"} Mar 13 01:27:51.184323 master-0 kubenswrapper[19803]: E0313 01:27:51.183261 19803 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-pod4812756b_4eb5_45bf_beb3_f78be74eaec4.slice/crio-conmon-f175308dc25e5a25aa95d958e990b4b066e7d36b4a48358eb4045f787948e696.scope\": RecentStats: unable to find data in memory cache]" Mar 13 01:27:51.543961 master-0 kubenswrapper[19803]: I0313 01:27:51.543884 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_4812756b-4eb5-45bf-beb3-f78be74eaec4/installer/0.log" Mar 13 01:27:51.544306 master-0 kubenswrapper[19803]: I0313 01:27:51.544006 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:51.571844 master-0 kubenswrapper[19803]: I0313 01:27:51.571533 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/installer-4-master-0" podStartSLOduration=20.57147348 podStartE2EDuration="20.57147348s" podCreationTimestamp="2026-03-13 01:27:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:27:32.938261343 +0000 UTC m=+600.903409062" watchObservedRunningTime="2026-03-13 01:27:51.57147348 +0000 UTC m=+619.536621209" Mar 13 01:27:51.721328 master-0 kubenswrapper[19803]: I0313 01:27:51.721257 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-var-lock\") pod \"4812756b-4eb5-45bf-beb3-f78be74eaec4\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " Mar 13 01:27:51.721631 master-0 kubenswrapper[19803]: I0313 01:27:51.721459 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4812756b-4eb5-45bf-beb3-f78be74eaec4-kube-api-access\") pod \"4812756b-4eb5-45bf-beb3-f78be74eaec4\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " Mar 13 01:27:51.721631 master-0 kubenswrapper[19803]: I0313 01:27:51.721554 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-kubelet-dir\") pod \"4812756b-4eb5-45bf-beb3-f78be74eaec4\" (UID: \"4812756b-4eb5-45bf-beb3-f78be74eaec4\") " Mar 13 01:27:51.721631 master-0 kubenswrapper[19803]: I0313 01:27:51.721563 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-var-lock" 
(OuterVolumeSpecName: "var-lock") pod "4812756b-4eb5-45bf-beb3-f78be74eaec4" (UID: "4812756b-4eb5-45bf-beb3-f78be74eaec4"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:27:51.721875 master-0 kubenswrapper[19803]: I0313 01:27:51.721796 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4812756b-4eb5-45bf-beb3-f78be74eaec4" (UID: "4812756b-4eb5-45bf-beb3-f78be74eaec4"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:27:51.722540 master-0 kubenswrapper[19803]: I0313 01:27:51.722481 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:51.722601 master-0 kubenswrapper[19803]: I0313 01:27:51.722555 19803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4812756b-4eb5-45bf-beb3-f78be74eaec4-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:51.726629 master-0 kubenswrapper[19803]: I0313 01:27:51.726482 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4812756b-4eb5-45bf-beb3-f78be74eaec4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4812756b-4eb5-45bf-beb3-f78be74eaec4" (UID: "4812756b-4eb5-45bf-beb3-f78be74eaec4"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:27:51.824866 master-0 kubenswrapper[19803]: I0313 01:27:51.824738 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4812756b-4eb5-45bf-beb3-f78be74eaec4-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:27:52.127191 master-0 kubenswrapper[19803]: I0313 01:27:52.127113 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-3-master-0_4812756b-4eb5-45bf-beb3-f78be74eaec4/installer/0.log" Mar 13 01:27:52.128238 master-0 kubenswrapper[19803]: I0313 01:27:52.127211 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-3-master-0" event={"ID":"4812756b-4eb5-45bf-beb3-f78be74eaec4","Type":"ContainerDied","Data":"76ba75bd024a1f0ecddd642b86f8a1972aa33643735dc325cf7e846630fdb314"} Mar 13 01:27:52.128238 master-0 kubenswrapper[19803]: I0313 01:27:52.127281 19803 scope.go:117] "RemoveContainer" containerID="f175308dc25e5a25aa95d958e990b4b066e7d36b4a48358eb4045f787948e696" Mar 13 01:27:52.128238 master-0 kubenswrapper[19803]: I0313 01:27:52.127291 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-3-master-0" Mar 13 01:27:52.187816 master-0 kubenswrapper[19803]: I0313 01:27:52.186633 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 01:27:52.194562 master-0 kubenswrapper[19803]: I0313 01:27:52.192716 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-controller-manager/installer-3-master-0"] Mar 13 01:27:52.338115 master-0 kubenswrapper[19803]: I0313 01:27:52.338006 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4812756b-4eb5-45bf-beb3-f78be74eaec4" path="/var/lib/kubelet/pods/4812756b-4eb5-45bf-beb3-f78be74eaec4/volumes" Mar 13 01:27:52.917431 master-0 kubenswrapper[19803]: I0313 01:27:52.917333 19803 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 01:27:52.917911 master-0 kubenswrapper[19803]: I0313 01:27:52.917818 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller" containerID="cri-o://44c7d80aa4aadd7ed9cfa67d8c3f0e0defda54140db09140424d6dcf8461fe9e" gracePeriod=30 Mar 13 01:27:52.918092 master-0 kubenswrapper[19803]: I0313 01:27:52.918043 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer" containerID="cri-o://b695d42371df758d1a7c1ba4450073ea3c8b6d48c4320403e34e1092182489bd" gracePeriod=30 Mar 13 01:27:52.918188 master-0 kubenswrapper[19803]: I0313 01:27:52.918145 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" 
podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" containerID="cri-o://8758f285d02298f3f87cf8a95d69a9b9fc7adb315bfb680293d79f27940394d1" gracePeriod=30 Mar 13 01:27:52.919178 master-0 kubenswrapper[19803]: I0313 01:27:52.919125 19803 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"] Mar 13 01:27:52.919642 master-0 kubenswrapper[19803]: E0313 01:27:52.919597 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="wait-for-host-port" Mar 13 01:27:52.919642 master-0 kubenswrapper[19803]: I0313 01:27:52.919633 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="wait-for-host-port" Mar 13 01:27:52.919810 master-0 kubenswrapper[19803]: E0313 01:27:52.919671 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4812756b-4eb5-45bf-beb3-f78be74eaec4" containerName="installer" Mar 13 01:27:52.919810 master-0 kubenswrapper[19803]: I0313 01:27:52.919687 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="4812756b-4eb5-45bf-beb3-f78be74eaec4" containerName="installer" Mar 13 01:27:52.919810 master-0 kubenswrapper[19803]: E0313 01:27:52.919723 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer" Mar 13 01:27:52.919810 master-0 kubenswrapper[19803]: I0313 01:27:52.919735 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer" Mar 13 01:27:52.919810 master-0 kubenswrapper[19803]: E0313 01:27:52.919763 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" Mar 13 01:27:52.919810 master-0 kubenswrapper[19803]: I0313 01:27:52.919775 19803 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" Mar 13 01:27:52.919810 master-0 kubenswrapper[19803]: E0313 01:27:52.919796 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller" Mar 13 01:27:52.919810 master-0 kubenswrapper[19803]: I0313 01:27:52.919808 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-recovery-controller" Mar 13 01:27:52.920361 master-0 kubenswrapper[19803]: E0313 01:27:52.919835 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" Mar 13 01:27:52.920361 master-0 kubenswrapper[19803]: I0313 01:27:52.919848 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" Mar 13 01:27:52.920361 master-0 kubenswrapper[19803]: I0313 01:27:52.920071 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" Mar 13 01:27:52.920361 master-0 kubenswrapper[19803]: I0313 01:27:52.920094 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer" Mar 13 01:27:52.920361 master-0 kubenswrapper[19803]: I0313 01:27:52.920125 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="4812756b-4eb5-45bf-beb3-f78be74eaec4" containerName="installer" Mar 13 01:27:52.920361 master-0 kubenswrapper[19803]: I0313 01:27:52.920156 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler" Mar 13 01:27:52.920361 master-0 kubenswrapper[19803]: I0313 01:27:52.920175 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" 
containerName="kube-scheduler-recovery-controller" Mar 13 01:27:52.921010 master-0 kubenswrapper[19803]: E0313 01:27:52.920440 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer" Mar 13 01:27:52.921010 master-0 kubenswrapper[19803]: I0313 01:27:52.920487 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer" Mar 13 01:27:52.921486 master-0 kubenswrapper[19803]: I0313 01:27:52.921069 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="1d3d45b6ce1b3764f9927e623a71adf8" containerName="kube-scheduler-cert-syncer" Mar 13 01:27:52.947931 master-0 kubenswrapper[19803]: I0313 01:27:52.946923 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:27:52.947931 master-0 kubenswrapper[19803]: I0313 01:27:52.947080 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:27:52.947931 master-0 kubenswrapper[19803]: E0313 01:27:52.947397 19803 projected.go:288] Couldn't get configMap openshift-kube-apiserver/kube-root-ca.crt: object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:27:52.947931 master-0 kubenswrapper[19803]: E0313 01:27:52.947431 19803 projected.go:194] Error preparing data for projected volume kube-api-access for pod openshift-kube-apiserver/installer-1-master-0: object 
"openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:27:52.947931 master-0 kubenswrapper[19803]: E0313 01:27:52.947506 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access podName:fdcd8438-d33f-490f-a841-8944c58506f8 nodeName:}" failed. No retries permitted until 2026-03-13 01:29:54.947477767 +0000 UTC m=+742.912625486 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "kube-api-access" (UniqueName: "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access") pod "installer-1-master-0" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8") : object "openshift-kube-apiserver"/"kube-root-ca.crt" not registered Mar 13 01:27:52.956555 master-0 kubenswrapper[19803]: I0313 01:27:52.956362 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"installer-2-master-0\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " pod="openshift-kube-controller-manager/installer-2-master-0" Mar 13 01:27:53.049159 master-0 kubenswrapper[19803]: I0313 01:27:53.048759 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:27:53.049159 master-0 kubenswrapper[19803]: I0313 01:27:53.048936 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:27:53.145079 master-0 kubenswrapper[19803]: I0313 01:27:53.144750 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/1.log" Mar 13 01:27:53.146933 master-0 kubenswrapper[19803]: I0313 01:27:53.146886 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/0.log" Mar 13 01:27:53.147577 master-0 kubenswrapper[19803]: I0313 01:27:53.147540 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log" Mar 13 01:27:53.148137 master-0 kubenswrapper[19803]: I0313 01:27:53.148087 19803 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="b695d42371df758d1a7c1ba4450073ea3c8b6d48c4320403e34e1092182489bd" exitCode=2 Mar 13 01:27:53.148137 master-0 kubenswrapper[19803]: I0313 01:27:53.148120 19803 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="8758f285d02298f3f87cf8a95d69a9b9fc7adb315bfb680293d79f27940394d1" exitCode=0 Mar 13 01:27:53.148137 master-0 kubenswrapper[19803]: I0313 01:27:53.148129 19803 generic.go:334] "Generic (PLEG): container finished" podID="1d3d45b6ce1b3764f9927e623a71adf8" containerID="44c7d80aa4aadd7ed9cfa67d8c3f0e0defda54140db09140424d6dcf8461fe9e" exitCode=0 Mar 13 01:27:53.148411 master-0 kubenswrapper[19803]: I0313 01:27:53.148168 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1893a5398893367fa6dfc57f35d1608dbd0ecd13591ae45338583f2663f6d59" Mar 13 01:27:53.148411 master-0 kubenswrapper[19803]: I0313 01:27:53.148185 19803 scope.go:117] "RemoveContainer" 
containerID="27da9b144d4a4f750c33de749ba64c7d7c2d328ab7a8dc23bb642f52fbaf1fd7" Mar 13 01:27:53.150366 master-0 kubenswrapper[19803]: I0313 01:27:53.149676 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") pod \"7106c6fe-7c8d-45b9-bc5c-521db743663f\" (UID: \"7106c6fe-7c8d-45b9-bc5c-521db743663f\") " Mar 13 01:27:53.150366 master-0 kubenswrapper[19803]: I0313 01:27:53.150203 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:27:53.150366 master-0 kubenswrapper[19803]: I0313 01:27:53.150286 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:27:53.150781 master-0 kubenswrapper[19803]: I0313 01:27:53.150370 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-resource-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: \"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:27:53.150781 master-0 kubenswrapper[19803]: I0313 01:27:53.150276 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1453f6461bf5d599ad65a4656343ee91-cert-dir\") pod \"openshift-kube-scheduler-master-0\" (UID: 
\"1453f6461bf5d599ad65a4656343ee91\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:27:53.154412 master-0 kubenswrapper[19803]: I0313 01:27:53.154355 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7106c6fe-7c8d-45b9-bc5c-521db743663f" (UID: "7106c6fe-7c8d-45b9-bc5c-521db743663f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:27:53.213874 master-0 kubenswrapper[19803]: I0313 01:27:53.213679 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/1.log" Mar 13 01:27:53.215705 master-0 kubenswrapper[19803]: I0313 01:27:53.215645 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler/0.log" Mar 13 01:27:53.216726 master-0 kubenswrapper[19803]: I0313 01:27:53.216651 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 01:27:53.228901 master-0 kubenswrapper[19803]: I0313 01:27:53.228836 19803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1d3d45b6ce1b3764f9927e623a71adf8" podUID="1453f6461bf5d599ad65a4656343ee91"
Mar 13 01:27:53.233818 master-0 kubenswrapper[19803]: I0313 01:27:53.233794 19803 scope.go:117] "RemoveContainer" containerID="36b85103aab608e07fe57ad44e030eaf64a6694fa43ef8b29c17a2a587b80411"
Mar 13 01:27:53.251611 master-0 kubenswrapper[19803]: I0313 01:27:53.251532 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7106c6fe-7c8d-45b9-bc5c-521db743663f-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:53.352741 master-0 kubenswrapper[19803]: I0313 01:27:53.352648 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") pod \"1d3d45b6ce1b3764f9927e623a71adf8\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") "
Mar 13 01:27:53.352741 master-0 kubenswrapper[19803]: I0313 01:27:53.352759 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") pod \"1d3d45b6ce1b3764f9927e623a71adf8\" (UID: \"1d3d45b6ce1b3764f9927e623a71adf8\") "
Mar 13 01:27:53.353399 master-0 kubenswrapper[19803]: I0313 01:27:53.352826 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "1d3d45b6ce1b3764f9927e623a71adf8" (UID: "1d3d45b6ce1b3764f9927e623a71adf8"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:27:53.353399 master-0 kubenswrapper[19803]: I0313 01:27:53.353006 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "1d3d45b6ce1b3764f9927e623a71adf8" (UID: "1d3d45b6ce1b3764f9927e623a71adf8"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:27:53.353736 master-0 kubenswrapper[19803]: I0313 01:27:53.353675 19803 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:53.353736 master-0 kubenswrapper[19803]: I0313 01:27:53.353722 19803 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/1d3d45b6ce1b3764f9927e623a71adf8-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:54.161168 master-0 kubenswrapper[19803]: I0313 01:27:54.161077 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1d3d45b6ce1b3764f9927e623a71adf8/kube-scheduler-cert-syncer/1.log"
Mar 13 01:27:54.164053 master-0 kubenswrapper[19803]: I0313 01:27:54.163987 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 01:27:54.166753 master-0 kubenswrapper[19803]: I0313 01:27:54.166632 19803 generic.go:334] "Generic (PLEG): container finished" podID="ad71e4d6-32df-4ac5-acd2-e402cfef4611" containerID="0ce59926c04cbb5bf5147c89edac2d32d8a0313612394a159a81b854d56aecd7" exitCode=0
Mar 13 01:27:54.166753 master-0 kubenswrapper[19803]: I0313 01:27:54.166710 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"ad71e4d6-32df-4ac5-acd2-e402cfef4611","Type":"ContainerDied","Data":"0ce59926c04cbb5bf5147c89edac2d32d8a0313612394a159a81b854d56aecd7"}
Mar 13 01:27:54.167974 master-0 kubenswrapper[19803]: I0313 01:27:54.167926 19803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1d3d45b6ce1b3764f9927e623a71adf8" podUID="1453f6461bf5d599ad65a4656343ee91"
Mar 13 01:27:54.208287 master-0 kubenswrapper[19803]: I0313 01:27:54.208166 19803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" oldPodUID="1d3d45b6ce1b3764f9927e623a71adf8" podUID="1453f6461bf5d599ad65a4656343ee91"
Mar 13 01:27:54.330557 master-0 kubenswrapper[19803]: I0313 01:27:54.330415 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d3d45b6ce1b3764f9927e623a71adf8" path="/var/lib/kubelet/pods/1d3d45b6ce1b3764f9927e623a71adf8/volumes"
Mar 13 01:27:55.314707 master-0 kubenswrapper[19803]: I0313 01:27:55.314616 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada"
Mar 13 01:27:55.314707 master-0 kubenswrapper[19803]: I0313 01:27:55.314680 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada"
Mar 13 01:27:55.340665 master-0 kubenswrapper[19803]: I0313 01:27:55.340466 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 13 01:27:55.358145 master-0 kubenswrapper[19803]: I0313 01:27:55.356431 19803 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-etcd/etcd-master-0"
Mar 13 01:27:55.365013 master-0 kubenswrapper[19803]: I0313 01:27:55.364937 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 13 01:27:55.400038 master-0 kubenswrapper[19803]: I0313 01:27:55.399657 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-master-0"]
Mar 13 01:27:55.613039 master-0 kubenswrapper[19803]: I0313 01:27:55.612915 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0"
Mar 13 01:27:55.668705 master-0 kubenswrapper[19803]: I0313 01:27:55.668577 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-master-0" podStartSLOduration=0.668551032 podStartE2EDuration="668.551032ms" podCreationTimestamp="2026-03-13 01:27:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:27:55.664842063 +0000 UTC m=+623.629989762" watchObservedRunningTime="2026-03-13 01:27:55.668551032 +0000 UTC m=+623.633698721"
Mar 13 01:27:55.799138 master-0 kubenswrapper[19803]: I0313 01:27:55.799028 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-var-lock\") pod \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") "
Mar 13 01:27:55.799138 master-0 kubenswrapper[19803]: I0313 01:27:55.799104 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kubelet-dir\") pod \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") "
Mar 13 01:27:55.799672 master-0 kubenswrapper[19803]: I0313 01:27:55.799179 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-var-lock" (OuterVolumeSpecName: "var-lock") pod "ad71e4d6-32df-4ac5-acd2-e402cfef4611" (UID: "ad71e4d6-32df-4ac5-acd2-e402cfef4611"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:27:55.799672 master-0 kubenswrapper[19803]: I0313 01:27:55.799255 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kube-api-access\") pod \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\" (UID: \"ad71e4d6-32df-4ac5-acd2-e402cfef4611\") "
Mar 13 01:27:55.799672 master-0 kubenswrapper[19803]: I0313 01:27:55.799277 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "ad71e4d6-32df-4ac5-acd2-e402cfef4611" (UID: "ad71e4d6-32df-4ac5-acd2-e402cfef4611"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:27:55.801299 master-0 kubenswrapper[19803]: I0313 01:27:55.801239 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:55.801299 master-0 kubenswrapper[19803]: I0313 01:27:55.801275 19803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:55.802363 master-0 kubenswrapper[19803]: I0313 01:27:55.802308 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ad71e4d6-32df-4ac5-acd2-e402cfef4611" (UID: "ad71e4d6-32df-4ac5-acd2-e402cfef4611"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:27:55.903132 master-0 kubenswrapper[19803]: I0313 01:27:55.903033 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ad71e4d6-32df-4ac5-acd2-e402cfef4611-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:56.190947 master-0 kubenswrapper[19803]: I0313 01:27:56.190624 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/installer-5-retry-1-master-0"
Mar 13 01:27:56.190947 master-0 kubenswrapper[19803]: I0313 01:27:56.190642 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/installer-5-retry-1-master-0" event={"ID":"ad71e4d6-32df-4ac5-acd2-e402cfef4611","Type":"ContainerDied","Data":"5aea1e02616917b8e583724b63836d5aed3ef0111dbb2d415bd3da0a6185b260"}
Mar 13 01:27:56.190947 master-0 kubenswrapper[19803]: I0313 01:27:56.190711 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5aea1e02616917b8e583724b63836d5aed3ef0111dbb2d415bd3da0a6185b260"
Mar 13 01:27:56.191653 master-0 kubenswrapper[19803]: I0313 01:27:56.190985 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada"
Mar 13 01:27:56.191653 master-0 kubenswrapper[19803]: I0313 01:27:56.191032 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-etcd/etcd-master-0" podUID="2fc3c9a4-7313-4f6b-a00b-85952c9adada"
Mar 13 01:27:58.518979 master-0 kubenswrapper[19803]: I0313 01:27:58.518902 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"]
Mar 13 01:27:58.520655 master-0 kubenswrapper[19803]: I0313 01:27:58.519175 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" podUID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" containerName="controller-manager" containerID="cri-o://797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681" gracePeriod=30
Mar 13 01:27:58.600906 master-0 kubenswrapper[19803]: I0313 01:27:58.600829 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"]
Mar 13 01:27:58.601144 master-0 kubenswrapper[19803]: I0313 01:27:58.601093 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" podUID="581ff17d-f121-4ece-8e45-81f1f710d163" containerName="route-controller-manager" containerID="cri-o://fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348" gracePeriod=30
Mar 13 01:27:59.054881 master-0 kubenswrapper[19803]: I0313 01:27:59.054841 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:27:59.058143 master-0 kubenswrapper[19803]: I0313 01:27:59.058117 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config\") pod \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") "
Mar 13 01:27:59.058261 master-0 kubenswrapper[19803]: I0313 01:27:59.058194 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles\") pod \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") "
Mar 13 01:27:59.058363 master-0 kubenswrapper[19803]: I0313 01:27:59.058321 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert\") pod \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") "
Mar 13 01:27:59.058440 master-0 kubenswrapper[19803]: I0313 01:27:59.058383 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca\") pod \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") "
Mar 13 01:27:59.058440 master-0 kubenswrapper[19803]: I0313 01:27:59.058413 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jvrdt\" (UniqueName: \"kubernetes.io/projected/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-kube-api-access-jvrdt\") pod \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\" (UID: \"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7\") "
Mar 13 01:27:59.058879 master-0 kubenswrapper[19803]: I0313 01:27:59.058842 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" (UID: "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:27:59.059259 master-0 kubenswrapper[19803]: I0313 01:27:59.058961 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config" (OuterVolumeSpecName: "config") pod "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" (UID: "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:27:59.059259 master-0 kubenswrapper[19803]: I0313 01:27:59.059235 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca" (OuterVolumeSpecName: "client-ca") pod "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" (UID: "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:27:59.061314 master-0 kubenswrapper[19803]: I0313 01:27:59.061269 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" (UID: "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:59.061459 master-0 kubenswrapper[19803]: I0313 01:27:59.061354 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-kube-api-access-jvrdt" (OuterVolumeSpecName: "kube-api-access-jvrdt") pod "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" (UID: "d477d4b0-8b36-4ff9-9b56-0e67709b1aa7"). InnerVolumeSpecName "kube-api-access-jvrdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:27:59.111140 master-0 kubenswrapper[19803]: I0313 01:27:59.111064 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"
Mar 13 01:27:59.160253 master-0 kubenswrapper[19803]: I0313 01:27:59.160121 19803 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-proxy-ca-bundles\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:59.160253 master-0 kubenswrapper[19803]: I0313 01:27:59.160179 19803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:59.160253 master-0 kubenswrapper[19803]: I0313 01:27:59.160194 19803 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:59.160253 master-0 kubenswrapper[19803]: I0313 01:27:59.160212 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jvrdt\" (UniqueName: \"kubernetes.io/projected/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-kube-api-access-jvrdt\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:59.160253 master-0 kubenswrapper[19803]: I0313 01:27:59.160228 19803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7-config\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:59.218796 master-0 kubenswrapper[19803]: I0313 01:27:59.218717 19803 generic.go:334] "Generic (PLEG): container finished" podID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" containerID="797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681" exitCode=0
Mar 13 01:27:59.219039 master-0 kubenswrapper[19803]: I0313 01:27:59.218818 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" event={"ID":"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7","Type":"ContainerDied","Data":"797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681"}
Mar 13 01:27:59.219039 master-0 kubenswrapper[19803]: I0313 01:27:59.218863 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s" event={"ID":"d477d4b0-8b36-4ff9-9b56-0e67709b1aa7","Type":"ContainerDied","Data":"aed424610f368f2ab3bbdf35a68a20b721e3a40783a95dd4a322c10d00ffa3aa"}
Mar 13 01:27:59.219039 master-0 kubenswrapper[19803]: I0313 01:27:59.218888 19803 scope.go:117] "RemoveContainer" containerID="797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681"
Mar 13 01:27:59.219352 master-0 kubenswrapper[19803]: I0313 01:27:59.219292 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"
Mar 13 01:27:59.223696 master-0 kubenswrapper[19803]: I0313 01:27:59.223602 19803 generic.go:334] "Generic (PLEG): container finished" podID="581ff17d-f121-4ece-8e45-81f1f710d163" containerID="fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348" exitCode=0
Mar 13 01:27:59.223896 master-0 kubenswrapper[19803]: I0313 01:27:59.223794 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" event={"ID":"581ff17d-f121-4ece-8e45-81f1f710d163","Type":"ContainerDied","Data":"fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348"}
Mar 13 01:27:59.223954 master-0 kubenswrapper[19803]: I0313 01:27:59.223915 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4" event={"ID":"581ff17d-f121-4ece-8e45-81f1f710d163","Type":"ContainerDied","Data":"14fb0b2eb240219320e6992cc4659cd81f4b0471ff79cf3cf2e89fa8f1d605a0"}
Mar 13 01:27:59.223998 master-0 kubenswrapper[19803]: I0313 01:27:59.223963 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"
Mar 13 01:27:59.249228 master-0 kubenswrapper[19803]: I0313 01:27:59.249179 19803 scope.go:117] "RemoveContainer" containerID="2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0"
Mar 13 01:27:59.261559 master-0 kubenswrapper[19803]: I0313 01:27:59.261462 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgz5w\" (UniqueName: \"kubernetes.io/projected/581ff17d-f121-4ece-8e45-81f1f710d163-kube-api-access-pgz5w\") pod \"581ff17d-f121-4ece-8e45-81f1f710d163\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") "
Mar 13 01:27:59.261673 master-0 kubenswrapper[19803]: I0313 01:27:59.261580 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert\") pod \"581ff17d-f121-4ece-8e45-81f1f710d163\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") "
Mar 13 01:27:59.261724 master-0 kubenswrapper[19803]: I0313 01:27:59.261674 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca\") pod \"581ff17d-f121-4ece-8e45-81f1f710d163\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") "
Mar 13 01:27:59.261772 master-0 kubenswrapper[19803]: I0313 01:27:59.261742 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config\") pod \"581ff17d-f121-4ece-8e45-81f1f710d163\" (UID: \"581ff17d-f121-4ece-8e45-81f1f710d163\") "
Mar 13 01:27:59.262938 master-0 kubenswrapper[19803]: I0313 01:27:59.262819 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config" (OuterVolumeSpecName: "config") pod "581ff17d-f121-4ece-8e45-81f1f710d163" (UID: "581ff17d-f121-4ece-8e45-81f1f710d163"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:27:59.263285 master-0 kubenswrapper[19803]: I0313 01:27:59.263254 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca" (OuterVolumeSpecName: "client-ca") pod "581ff17d-f121-4ece-8e45-81f1f710d163" (UID: "581ff17d-f121-4ece-8e45-81f1f710d163"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Mar 13 01:27:59.266469 master-0 kubenswrapper[19803]: I0313 01:27:59.266425 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/581ff17d-f121-4ece-8e45-81f1f710d163-kube-api-access-pgz5w" (OuterVolumeSpecName: "kube-api-access-pgz5w") pod "581ff17d-f121-4ece-8e45-81f1f710d163" (UID: "581ff17d-f121-4ece-8e45-81f1f710d163"). InnerVolumeSpecName "kube-api-access-pgz5w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:27:59.272937 master-0 kubenswrapper[19803]: I0313 01:27:59.272907 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "581ff17d-f121-4ece-8e45-81f1f710d163" (UID: "581ff17d-f121-4ece-8e45-81f1f710d163"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Mar 13 01:27:59.275935 master-0 kubenswrapper[19803]: I0313 01:27:59.275842 19803 scope.go:117] "RemoveContainer" containerID="797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681"
Mar 13 01:27:59.276343 master-0 kubenswrapper[19803]: E0313 01:27:59.276301 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681\": container with ID starting with 797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681 not found: ID does not exist" containerID="797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681"
Mar 13 01:27:59.276485 master-0 kubenswrapper[19803]: I0313 01:27:59.276450 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681"} err="failed to get container status \"797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681\": rpc error: code = NotFound desc = could not find container \"797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681\": container with ID starting with 797ff47f78cf2087a49344413ef18cf56fa887bc2657b21857a5c5bc15e1c681 not found: ID does not exist"
Mar 13 01:27:59.276681 master-0 kubenswrapper[19803]: I0313 01:27:59.276662 19803 scope.go:117] "RemoveContainer" containerID="2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0"
Mar 13 01:27:59.277216 master-0 kubenswrapper[19803]: E0313 01:27:59.277190 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0\": container with ID starting with 2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0 not found: ID does not exist" containerID="2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0"
Mar 13 01:27:59.277304 master-0 kubenswrapper[19803]: I0313 01:27:59.277222 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0"} err="failed to get container status \"2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0\": rpc error: code = NotFound desc = could not find container \"2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0\": container with ID starting with 2b68b0bc8f28fb1d6f1763ee543c293018c538560669a8098c958ea64897d3d0 not found: ID does not exist"
Mar 13 01:27:59.277304 master-0 kubenswrapper[19803]: I0313 01:27:59.277245 19803 scope.go:117] "RemoveContainer" containerID="fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348"
Mar 13 01:27:59.280662 master-0 kubenswrapper[19803]: I0313 01:27:59.279379 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"]
Mar 13 01:27:59.287968 master-0 kubenswrapper[19803]: I0313 01:27:59.287935 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7f46d696f9-s9d6s"]
Mar 13 01:27:59.303679 master-0 kubenswrapper[19803]: I0313 01:27:59.303649 19803 scope.go:117] "RemoveContainer" containerID="fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348"
Mar 13 01:27:59.304398 master-0 kubenswrapper[19803]: E0313 01:27:59.304363 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348\": container with ID starting with fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348 not found: ID does not exist" containerID="fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348"
Mar 13 01:27:59.304590 master-0 kubenswrapper[19803]: I0313 01:27:59.304547 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348"} err="failed to get container status \"fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348\": rpc error: code = NotFound desc = could not find container \"fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348\": container with ID starting with fe57cc528fb0c8adbd1f54f71dd1164181770be17be3724e8d71e64b9b902348 not found: ID does not exist"
Mar 13 01:27:59.365250 master-0 kubenswrapper[19803]: I0313 01:27:59.365206 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pgz5w\" (UniqueName: \"kubernetes.io/projected/581ff17d-f121-4ece-8e45-81f1f710d163-kube-api-access-pgz5w\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:59.365624 master-0 kubenswrapper[19803]: I0313 01:27:59.365605 19803 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/581ff17d-f121-4ece-8e45-81f1f710d163-serving-cert\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:59.365743 master-0 kubenswrapper[19803]: I0313 01:27:59.365723 19803 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-client-ca\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:59.365855 master-0 kubenswrapper[19803]: I0313 01:27:59.365839 19803 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/581ff17d-f121-4ece-8e45-81f1f710d163-config\") on node \"master-0\" DevicePath \"\""
Mar 13 01:27:59.584736 master-0 kubenswrapper[19803]: I0313 01:27:59.583801 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"]
Mar 13 01:27:59.589394 master-0 kubenswrapper[19803]: I0313 01:27:59.589340 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6cc78fd984-g55t4"]
Mar 13 01:28:00.329198 master-0 kubenswrapper[19803]: I0313 01:28:00.329105 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="581ff17d-f121-4ece-8e45-81f1f710d163" path="/var/lib/kubelet/pods/581ff17d-f121-4ece-8e45-81f1f710d163/volumes"
Mar 13 01:28:00.330406 master-0 kubenswrapper[19803]: I0313 01:28:00.330360 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" path="/var/lib/kubelet/pods/d477d4b0-8b36-4ff9-9b56-0e67709b1aa7/volumes"
Mar 13 01:28:05.314684 master-0 kubenswrapper[19803]: I0313 01:28:05.314588 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 01:28:05.335328 master-0 kubenswrapper[19803]: I0313 01:28:05.335262 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="9004f4c7-5b18-4737-9a28-b7cf03c57a67"
Mar 13 01:28:05.335328 master-0 kubenswrapper[19803]: I0313 01:28:05.335320 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podUID="9004f4c7-5b18-4737-9a28-b7cf03c57a67"
Mar 13 01:28:05.347475 master-0 kubenswrapper[19803]: I0313 01:28:05.347421 19803 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 01:28:05.350171 master-0 kubenswrapper[19803]: I0313 01:28:05.350109 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 01:28:05.352074 master-0 kubenswrapper[19803]: I0313 01:28:05.351972 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 01:28:05.368743 master-0 kubenswrapper[19803]: I0313 01:28:05.368671 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 01:28:05.380455 master-0 kubenswrapper[19803]: I0313 01:28:05.380378 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-master-0"]
Mar 13 01:28:06.298643 master-0 kubenswrapper[19803]: I0313 01:28:06.297801 19803 generic.go:334] "Generic (PLEG): container finished" podID="1453f6461bf5d599ad65a4656343ee91" containerID="3cb52a36cbb9bf5d3b92e0b11285c594b3148e3dbfd494e637e9cbee946b40ac" exitCode=0
Mar 13 01:28:06.298643 master-0 kubenswrapper[19803]: I0313 01:28:06.297899 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerDied","Data":"3cb52a36cbb9bf5d3b92e0b11285c594b3148e3dbfd494e637e9cbee946b40ac"}
Mar 13 01:28:06.298643 master-0 kubenswrapper[19803]: I0313 01:28:06.297956 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"651f4475233214aba022e1be883664d0792658770993601c4a761499c9309c1f"}
Mar 13 01:28:07.308201 master-0 kubenswrapper[19803]: I0313 01:28:07.308085 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"936d1ab9e5e457d5d8dfe601248bed9ee6ea3823be66acb90c4335a4e775f66a"}
Mar 13 01:28:07.308201 master-0 kubenswrapper[19803]: I0313 01:28:07.308162 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"6b05f2a1eb48fa45c5b8cb14288fc92a6fa36e2b2fe595449e30a5397c7b5a24"}
Mar 13 01:28:07.308201 master-0 kubenswrapper[19803]: I0313 01:28:07.308180 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" event={"ID":"1453f6461bf5d599ad65a4656343ee91","Type":"ContainerStarted","Data":"7f363410fad688e02f41f3d55aeb1e3fe4e7f486d5292e8c3a424748100eeb9f"}
Mar 13 01:28:07.309369 master-0 kubenswrapper[19803]: I0313 01:28:07.309266 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0"
Mar 13 01:28:07.337616 master-0 kubenswrapper[19803]: I0313 01:28:07.337487 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" podStartSLOduration=2.33746078 podStartE2EDuration="2.33746078s" podCreationTimestamp="2026-03-13 01:28:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:28:07.336773594 +0000 UTC m=+635.301921283" watchObservedRunningTime="2026-03-13 01:28:07.33746078 +0000 UTC m=+635.302608459"
Mar 13 01:28:25.898655 master-0 kubenswrapper[19803]: I0313 01:28:25.898503 19803 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 01:28:25.899822 master-0 kubenswrapper[19803]: I0313 01:28:25.898941 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-cert-syncer" containerID="cri-o://3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4" gracePeriod=30
Mar 13 01:28:25.899822 master-0 kubenswrapper[19803]: I0313 01:28:25.898987 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" containerID="cri-o://2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064" gracePeriod=30
Mar 13 01:28:25.899822 master-0 kubenswrapper[19803]: I0313 01:28:25.899013 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager" containerID="cri-o://a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635" gracePeriod=30
Mar 13 01:28:25.899822 master-0 kubenswrapper[19803]: I0313 01:28:25.898879 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-recovery-controller" containerID="cri-o://2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa" gracePeriod=30
Mar 13 01:28:25.918621 master-0 kubenswrapper[19803]: I0313 01:28:25.918072 19803 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"]
Mar 13 01:28:25.918621 master-0 kubenswrapper[19803]: E0313 01:28:25.918502 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ad71e4d6-32df-4ac5-acd2-e402cfef4611" containerName="installer"
Mar 13 01:28:25.918621 master-0 kubenswrapper[19803]: I0313 01:28:25.918550 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="ad71e4d6-32df-4ac5-acd2-e402cfef4611" containerName="installer"
Mar 13 01:28:25.918621 master-0 kubenswrapper[19803]: E0313 01:28:25.918582 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-cert-syncer"
Mar 13 01:28:25.918621 master-0 kubenswrapper[19803]: I0313 01:28:25.918593 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-cert-syncer"
Mar 13 01:28:25.918621 master-0 kubenswrapper[19803]: E0313 01:28:25.918611 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-cert-syncer"
Mar 13 01:28:25.918621 master-0 kubenswrapper[19803]: I0313 01:28:25.918624 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-cert-syncer"
Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: E0313 01:28:25.918640 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller"
Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: I0313 01:28:25.918651 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller"
Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: E0313 01:28:25.918663 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller"
Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: I0313 01:28:25.918674 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller"
Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: E0313 01:28:25.918707 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager"
Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: I0313 01:28:25.918718 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e04786030519cf5fd9f600ea6710e9"
containerName="kube-controller-manager" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: E0313 01:28:25.918739 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: I0313 01:28:25.918749 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: E0313 01:28:25.918767 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" containerName="controller-manager" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: I0313 01:28:25.918780 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" containerName="controller-manager" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: E0313 01:28:25.918795 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" containerName="controller-manager" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: I0313 01:28:25.918806 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" containerName="controller-manager" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: E0313 01:28:25.918831 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: I0313 01:28:25.918841 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: E0313 01:28:25.918855 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" 
Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: I0313 01:28:25.918866 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: E0313 01:28:25.918880 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-recovery-controller" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: I0313 01:28:25.918891 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-recovery-controller" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: E0313 01:28:25.918916 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="581ff17d-f121-4ece-8e45-81f1f710d163" containerName="route-controller-manager" Mar 13 01:28:25.919112 master-0 kubenswrapper[19803]: I0313 01:28:25.918927 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="581ff17d-f121-4ece-8e45-81f1f710d163" containerName="route-controller-manager" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919131 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-recovery-controller" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919159 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919176 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919197 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e04786030519cf5fd9f600ea6710e9" 
containerName="cluster-policy-controller" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919218 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-cert-syncer" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919239 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919257 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919270 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="kube-controller-manager-cert-syncer" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919289 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="581ff17d-f121-4ece-8e45-81f1f710d163" containerName="route-controller-manager" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919308 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919330 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" containerName="controller-manager" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919348 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="ad71e4d6-32df-4ac5-acd2-e402cfef4611" containerName="installer" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919371 19803 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="d477d4b0-8b36-4ff9-9b56-0e67709b1aa7" containerName="controller-manager" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: E0313 01:28:25.919572 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919589 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" Mar 13 01:28:25.920238 master-0 kubenswrapper[19803]: I0313 01:28:25.919798 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="24e04786030519cf5fd9f600ea6710e9" containerName="cluster-policy-controller" Mar 13 01:28:25.979557 master-0 kubenswrapper[19803]: I0313 01:28:25.978330 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/961b4d54fbc741f185dfae043b7eaea5-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"961b4d54fbc741f185dfae043b7eaea5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:25.979557 master-0 kubenswrapper[19803]: I0313 01:28:25.978396 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/961b4d54fbc741f185dfae043b7eaea5-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"961b4d54fbc741f185dfae043b7eaea5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:26.079694 master-0 kubenswrapper[19803]: I0313 01:28:26.079637 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/961b4d54fbc741f185dfae043b7eaea5-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"961b4d54fbc741f185dfae043b7eaea5\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:26.079694 master-0 kubenswrapper[19803]: I0313 01:28:26.079697 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/961b4d54fbc741f185dfae043b7eaea5-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"961b4d54fbc741f185dfae043b7eaea5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:26.079841 master-0 kubenswrapper[19803]: I0313 01:28:26.079793 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/961b4d54fbc741f185dfae043b7eaea5-cert-dir\") pod \"kube-controller-manager-master-0\" (UID: \"961b4d54fbc741f185dfae043b7eaea5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:26.079906 master-0 kubenswrapper[19803]: I0313 01:28:26.079887 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/961b4d54fbc741f185dfae043b7eaea5-resource-dir\") pod \"kube-controller-manager-master-0\" (UID: \"961b4d54fbc741f185dfae043b7eaea5\") " pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:26.127566 master-0 kubenswrapper[19803]: I0313 01:28:26.127501 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager-cert-syncer/1.log" Mar 13 01:28:26.128474 master-0 kubenswrapper[19803]: I0313 01:28:26.128447 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/3.log" Mar 13 01:28:26.130693 master-0 kubenswrapper[19803]: I0313 01:28:26.130667 19803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager-cert-syncer/0.log" Mar 13 01:28:26.131083 master-0 kubenswrapper[19803]: I0313 01:28:26.131055 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log" Mar 13 01:28:26.131144 master-0 kubenswrapper[19803]: I0313 01:28:26.131136 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:26.134156 master-0 kubenswrapper[19803]: I0313 01:28:26.134125 19803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="24e04786030519cf5fd9f600ea6710e9" podUID="961b4d54fbc741f185dfae043b7eaea5" Mar 13 01:28:26.181378 master-0 kubenswrapper[19803]: I0313 01:28:26.181225 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-cert-dir\") pod \"24e04786030519cf5fd9f600ea6710e9\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " Mar 13 01:28:26.181378 master-0 kubenswrapper[19803]: I0313 01:28:26.181342 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-resource-dir\") pod \"24e04786030519cf5fd9f600ea6710e9\" (UID: \"24e04786030519cf5fd9f600ea6710e9\") " Mar 13 01:28:26.181378 master-0 kubenswrapper[19803]: I0313 01:28:26.181368 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "24e04786030519cf5fd9f600ea6710e9" (UID: 
"24e04786030519cf5fd9f600ea6710e9"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:28:26.181770 master-0 kubenswrapper[19803]: I0313 01:28:26.181448 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "24e04786030519cf5fd9f600ea6710e9" (UID: "24e04786030519cf5fd9f600ea6710e9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:28:26.181770 master-0 kubenswrapper[19803]: I0313 01:28:26.181676 19803 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-cert-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:28:26.181770 master-0 kubenswrapper[19803]: I0313 01:28:26.181690 19803 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/24e04786030519cf5fd9f600ea6710e9-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:28:26.322285 master-0 kubenswrapper[19803]: I0313 01:28:26.322182 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24e04786030519cf5fd9f600ea6710e9" path="/var/lib/kubelet/pods/24e04786030519cf5fd9f600ea6710e9/volumes" Mar 13 01:28:26.503777 master-0 kubenswrapper[19803]: I0313 01:28:26.503681 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager-cert-syncer/1.log" Mar 13 01:28:26.505317 master-0 kubenswrapper[19803]: I0313 01:28:26.505239 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/cluster-policy-controller/3.log" Mar 13 01:28:26.509647 master-0 kubenswrapper[19803]: I0313 
01:28:26.509587 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager-cert-syncer/0.log" Mar 13 01:28:26.511171 master-0 kubenswrapper[19803]: I0313 01:28:26.511104 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_24e04786030519cf5fd9f600ea6710e9/kube-controller-manager/0.log" Mar 13 01:28:26.511336 master-0 kubenswrapper[19803]: I0313 01:28:26.511192 19803 generic.go:334] "Generic (PLEG): container finished" podID="24e04786030519cf5fd9f600ea6710e9" containerID="3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4" exitCode=2 Mar 13 01:28:26.511336 master-0 kubenswrapper[19803]: I0313 01:28:26.511229 19803 generic.go:334] "Generic (PLEG): container finished" podID="24e04786030519cf5fd9f600ea6710e9" containerID="2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064" exitCode=0 Mar 13 01:28:26.511336 master-0 kubenswrapper[19803]: I0313 01:28:26.511248 19803 generic.go:334] "Generic (PLEG): container finished" podID="24e04786030519cf5fd9f600ea6710e9" containerID="a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635" exitCode=0 Mar 13 01:28:26.511336 master-0 kubenswrapper[19803]: I0313 01:28:26.511265 19803 generic.go:334] "Generic (PLEG): container finished" podID="24e04786030519cf5fd9f600ea6710e9" containerID="2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa" exitCode=0 Mar 13 01:28:26.511755 master-0 kubenswrapper[19803]: I0313 01:28:26.511394 19803 scope.go:117] "RemoveContainer" containerID="3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4" Mar 13 01:28:26.513499 master-0 kubenswrapper[19803]: I0313 01:28:26.512715 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:26.515637 master-0 kubenswrapper[19803]: I0313 01:28:26.515570 19803 generic.go:334] "Generic (PLEG): container finished" podID="943a993e-2a88-4bda-832f-d03e9d2d08d8" containerID="0c57179f9f1188dc628485baf6939e006097c3e65ac118069477873d8409a413" exitCode=0 Mar 13 01:28:26.515766 master-0 kubenswrapper[19803]: I0313 01:28:26.515668 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"943a993e-2a88-4bda-832f-d03e9d2d08d8","Type":"ContainerDied","Data":"0c57179f9f1188dc628485baf6939e006097c3e65ac118069477873d8409a413"} Mar 13 01:28:26.518982 master-0 kubenswrapper[19803]: I0313 01:28:26.518908 19803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="24e04786030519cf5fd9f600ea6710e9" podUID="961b4d54fbc741f185dfae043b7eaea5" Mar 13 01:28:26.556269 master-0 kubenswrapper[19803]: I0313 01:28:26.556182 19803 scope.go:117] "RemoveContainer" containerID="2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064" Mar 13 01:28:26.557416 master-0 kubenswrapper[19803]: I0313 01:28:26.557330 19803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" oldPodUID="24e04786030519cf5fd9f600ea6710e9" podUID="961b4d54fbc741f185dfae043b7eaea5" Mar 13 01:28:26.583459 master-0 kubenswrapper[19803]: I0313 01:28:26.583394 19803 scope.go:117] "RemoveContainer" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" Mar 13 01:28:26.605051 master-0 kubenswrapper[19803]: I0313 01:28:26.604999 19803 scope.go:117] "RemoveContainer" containerID="a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635" Mar 13 01:28:26.630589 master-0 kubenswrapper[19803]: 
I0313 01:28:26.630495 19803 scope.go:117] "RemoveContainer" containerID="2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa" Mar 13 01:28:26.658875 master-0 kubenswrapper[19803]: I0313 01:28:26.658802 19803 scope.go:117] "RemoveContainer" containerID="1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b" Mar 13 01:28:26.675652 master-0 kubenswrapper[19803]: I0313 01:28:26.675593 19803 scope.go:117] "RemoveContainer" containerID="5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82" Mar 13 01:28:26.701559 master-0 kubenswrapper[19803]: I0313 01:28:26.701483 19803 scope.go:117] "RemoveContainer" containerID="3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4" Mar 13 01:28:26.702199 master-0 kubenswrapper[19803]: E0313 01:28:26.702141 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4\": container with ID starting with 3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4 not found: ID does not exist" containerID="3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4" Mar 13 01:28:26.702285 master-0 kubenswrapper[19803]: I0313 01:28:26.702198 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4"} err="failed to get container status \"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4\": rpc error: code = NotFound desc = could not find container \"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4\": container with ID starting with 3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4 not found: ID does not exist" Mar 13 01:28:26.702285 master-0 kubenswrapper[19803]: I0313 01:28:26.702225 19803 scope.go:117] "RemoveContainer" containerID="2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064" 
Mar 13 01:28:26.702583 master-0 kubenswrapper[19803]: E0313 01:28:26.702540 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064\": container with ID starting with 2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064 not found: ID does not exist" containerID="2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064" Mar 13 01:28:26.702583 master-0 kubenswrapper[19803]: I0313 01:28:26.702574 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064"} err="failed to get container status \"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064\": rpc error: code = NotFound desc = could not find container \"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064\": container with ID starting with 2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064 not found: ID does not exist" Mar 13 01:28:26.702741 master-0 kubenswrapper[19803]: I0313 01:28:26.702591 19803 scope.go:117] "RemoveContainer" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" Mar 13 01:28:26.703178 master-0 kubenswrapper[19803]: E0313 01:28:26.703136 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53\": container with ID starting with a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53 not found: ID does not exist" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" Mar 13 01:28:26.703178 master-0 kubenswrapper[19803]: I0313 01:28:26.703163 19803 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53"} err="failed to get container status \"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53\": rpc error: code = NotFound desc = could not find container \"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53\": container with ID starting with a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53 not found: ID does not exist" Mar 13 01:28:26.703178 master-0 kubenswrapper[19803]: I0313 01:28:26.703177 19803 scope.go:117] "RemoveContainer" containerID="a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635" Mar 13 01:28:26.703578 master-0 kubenswrapper[19803]: E0313 01:28:26.703496 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635\": container with ID starting with a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635 not found: ID does not exist" containerID="a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635" Mar 13 01:28:26.703578 master-0 kubenswrapper[19803]: I0313 01:28:26.703565 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635"} err="failed to get container status \"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635\": rpc error: code = NotFound desc = could not find container \"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635\": container with ID starting with a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635 not found: ID does not exist" Mar 13 01:28:26.703742 master-0 kubenswrapper[19803]: I0313 01:28:26.703583 19803 scope.go:117] "RemoveContainer" containerID="2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa" Mar 13 01:28:26.703937 master-0 kubenswrapper[19803]: E0313 
01:28:26.703907 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa\": container with ID starting with 2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa not found: ID does not exist" containerID="2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa" Mar 13 01:28:26.704022 master-0 kubenswrapper[19803]: I0313 01:28:26.703954 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa"} err="failed to get container status \"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa\": rpc error: code = NotFound desc = could not find container \"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa\": container with ID starting with 2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa not found: ID does not exist" Mar 13 01:28:26.704022 master-0 kubenswrapper[19803]: I0313 01:28:26.703970 19803 scope.go:117] "RemoveContainer" containerID="1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b" Mar 13 01:28:26.704288 master-0 kubenswrapper[19803]: E0313 01:28:26.704251 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b\": container with ID starting with 1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b not found: ID does not exist" containerID="1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b" Mar 13 01:28:26.704376 master-0 kubenswrapper[19803]: I0313 01:28:26.704281 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b"} err="failed to get container status 
\"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b\": rpc error: code = NotFound desc = could not find container \"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b\": container with ID starting with 1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b not found: ID does not exist" Mar 13 01:28:26.704376 master-0 kubenswrapper[19803]: I0313 01:28:26.704302 19803 scope.go:117] "RemoveContainer" containerID="5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82" Mar 13 01:28:26.704791 master-0 kubenswrapper[19803]: E0313 01:28:26.704726 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82\": container with ID starting with 5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82 not found: ID does not exist" containerID="5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82" Mar 13 01:28:26.704892 master-0 kubenswrapper[19803]: I0313 01:28:26.704818 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82"} err="failed to get container status \"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82\": rpc error: code = NotFound desc = could not find container \"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82\": container with ID starting with 5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82 not found: ID does not exist" Mar 13 01:28:26.704892 master-0 kubenswrapper[19803]: I0313 01:28:26.704881 19803 scope.go:117] "RemoveContainer" containerID="3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4" Mar 13 01:28:26.705482 master-0 kubenswrapper[19803]: I0313 01:28:26.705404 19803 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4"} err="failed to get container status \"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4\": rpc error: code = NotFound desc = could not find container \"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4\": container with ID starting with 3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4 not found: ID does not exist" Mar 13 01:28:26.705661 master-0 kubenswrapper[19803]: I0313 01:28:26.705633 19803 scope.go:117] "RemoveContainer" containerID="2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064" Mar 13 01:28:26.706321 master-0 kubenswrapper[19803]: I0313 01:28:26.706281 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064"} err="failed to get container status \"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064\": rpc error: code = NotFound desc = could not find container \"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064\": container with ID starting with 2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064 not found: ID does not exist" Mar 13 01:28:26.706321 master-0 kubenswrapper[19803]: I0313 01:28:26.706309 19803 scope.go:117] "RemoveContainer" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" Mar 13 01:28:26.707008 master-0 kubenswrapper[19803]: I0313 01:28:26.706957 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53"} err="failed to get container status \"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53\": rpc error: code = NotFound desc = could not find container \"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53\": container with ID starting with 
a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53 not found: ID does not exist" Mar 13 01:28:26.707008 master-0 kubenswrapper[19803]: I0313 01:28:26.707002 19803 scope.go:117] "RemoveContainer" containerID="a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635" Mar 13 01:28:26.707401 master-0 kubenswrapper[19803]: I0313 01:28:26.707365 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635"} err="failed to get container status \"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635\": rpc error: code = NotFound desc = could not find container \"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635\": container with ID starting with a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635 not found: ID does not exist" Mar 13 01:28:26.707532 master-0 kubenswrapper[19803]: I0313 01:28:26.707411 19803 scope.go:117] "RemoveContainer" containerID="2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa" Mar 13 01:28:26.708421 master-0 kubenswrapper[19803]: I0313 01:28:26.708356 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa"} err="failed to get container status \"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa\": rpc error: code = NotFound desc = could not find container \"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa\": container with ID starting with 2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa not found: ID does not exist" Mar 13 01:28:26.708421 master-0 kubenswrapper[19803]: I0313 01:28:26.708401 19803 scope.go:117] "RemoveContainer" containerID="1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b" Mar 13 01:28:26.708799 master-0 kubenswrapper[19803]: I0313 01:28:26.708760 19803 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b"} err="failed to get container status \"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b\": rpc error: code = NotFound desc = could not find container \"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b\": container with ID starting with 1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b not found: ID does not exist" Mar 13 01:28:26.708799 master-0 kubenswrapper[19803]: I0313 01:28:26.708785 19803 scope.go:117] "RemoveContainer" containerID="5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82" Mar 13 01:28:26.709422 master-0 kubenswrapper[19803]: I0313 01:28:26.709310 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82"} err="failed to get container status \"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82\": rpc error: code = NotFound desc = could not find container \"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82\": container with ID starting with 5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82 not found: ID does not exist" Mar 13 01:28:26.709422 master-0 kubenswrapper[19803]: I0313 01:28:26.709404 19803 scope.go:117] "RemoveContainer" containerID="3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4" Mar 13 01:28:26.710273 master-0 kubenswrapper[19803]: I0313 01:28:26.710164 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4"} err="failed to get container status \"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4\": rpc error: code = NotFound desc = could not find container \"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4\": container with ID starting 
with 3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4 not found: ID does not exist" Mar 13 01:28:26.710273 master-0 kubenswrapper[19803]: I0313 01:28:26.710207 19803 scope.go:117] "RemoveContainer" containerID="2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064" Mar 13 01:28:26.710859 master-0 kubenswrapper[19803]: I0313 01:28:26.710790 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064"} err="failed to get container status \"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064\": rpc error: code = NotFound desc = could not find container \"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064\": container with ID starting with 2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064 not found: ID does not exist" Mar 13 01:28:26.711079 master-0 kubenswrapper[19803]: I0313 01:28:26.711042 19803 scope.go:117] "RemoveContainer" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" Mar 13 01:28:26.711684 master-0 kubenswrapper[19803]: I0313 01:28:26.711642 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53"} err="failed to get container status \"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53\": rpc error: code = NotFound desc = could not find container \"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53\": container with ID starting with a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53 not found: ID does not exist" Mar 13 01:28:26.711843 master-0 kubenswrapper[19803]: I0313 01:28:26.711671 19803 scope.go:117] "RemoveContainer" containerID="a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635" Mar 13 01:28:26.712143 master-0 kubenswrapper[19803]: I0313 01:28:26.712060 19803 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635"} err="failed to get container status \"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635\": rpc error: code = NotFound desc = could not find container \"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635\": container with ID starting with a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635 not found: ID does not exist" Mar 13 01:28:26.712143 master-0 kubenswrapper[19803]: I0313 01:28:26.712131 19803 scope.go:117] "RemoveContainer" containerID="2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa" Mar 13 01:28:26.712947 master-0 kubenswrapper[19803]: I0313 01:28:26.712909 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa"} err="failed to get container status \"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa\": rpc error: code = NotFound desc = could not find container \"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa\": container with ID starting with 2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa not found: ID does not exist" Mar 13 01:28:26.712947 master-0 kubenswrapper[19803]: I0313 01:28:26.712936 19803 scope.go:117] "RemoveContainer" containerID="1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b" Mar 13 01:28:26.713776 master-0 kubenswrapper[19803]: I0313 01:28:26.713488 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b"} err="failed to get container status \"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b\": rpc error: code = NotFound desc = could not find container \"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b\": 
container with ID starting with 1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b not found: ID does not exist" Mar 13 01:28:26.713776 master-0 kubenswrapper[19803]: I0313 01:28:26.713618 19803 scope.go:117] "RemoveContainer" containerID="5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82" Mar 13 01:28:26.714133 master-0 kubenswrapper[19803]: I0313 01:28:26.714050 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82"} err="failed to get container status \"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82\": rpc error: code = NotFound desc = could not find container \"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82\": container with ID starting with 5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82 not found: ID does not exist" Mar 13 01:28:26.714133 master-0 kubenswrapper[19803]: I0313 01:28:26.714077 19803 scope.go:117] "RemoveContainer" containerID="3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4" Mar 13 01:28:26.714427 master-0 kubenswrapper[19803]: I0313 01:28:26.714372 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4"} err="failed to get container status \"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4\": rpc error: code = NotFound desc = could not find container \"3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4\": container with ID starting with 3dca506a0e895927ce3142a0ba376b3495a9e5c34af52b2fa20c0697748f27e4 not found: ID does not exist" Mar 13 01:28:26.714569 master-0 kubenswrapper[19803]: I0313 01:28:26.714426 19803 scope.go:117] "RemoveContainer" containerID="2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064" Mar 13 01:28:26.714926 master-0 kubenswrapper[19803]: I0313 01:28:26.714885 
19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064"} err="failed to get container status \"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064\": rpc error: code = NotFound desc = could not find container \"2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064\": container with ID starting with 2a9f04b03465469cc38aa5f8073e515e49b294e0b55af4a44996b9a4bdd48064 not found: ID does not exist" Mar 13 01:28:26.714926 master-0 kubenswrapper[19803]: I0313 01:28:26.714912 19803 scope.go:117] "RemoveContainer" containerID="a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53" Mar 13 01:28:26.715317 master-0 kubenswrapper[19803]: I0313 01:28:26.715189 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53"} err="failed to get container status \"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53\": rpc error: code = NotFound desc = could not find container \"a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53\": container with ID starting with a502b5e9466f9db365a3620e9e61c6f562afd48001c4641813c9b88fb1678a53 not found: ID does not exist" Mar 13 01:28:26.715317 master-0 kubenswrapper[19803]: I0313 01:28:26.715211 19803 scope.go:117] "RemoveContainer" containerID="a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635" Mar 13 01:28:26.715610 master-0 kubenswrapper[19803]: I0313 01:28:26.715493 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635"} err="failed to get container status \"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635\": rpc error: code = NotFound desc = could not find container 
\"a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635\": container with ID starting with a3a3d2ad688af8f836804a3f31a160e6d7038d92504a64a69b85581aa0b39635 not found: ID does not exist" Mar 13 01:28:26.715610 master-0 kubenswrapper[19803]: I0313 01:28:26.715578 19803 scope.go:117] "RemoveContainer" containerID="2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa" Mar 13 01:28:26.716147 master-0 kubenswrapper[19803]: I0313 01:28:26.715960 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa"} err="failed to get container status \"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa\": rpc error: code = NotFound desc = could not find container \"2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa\": container with ID starting with 2f50eaf795f8beccac20ef63125046126016c537a2ff5c1358948d588a1745aa not found: ID does not exist" Mar 13 01:28:26.716147 master-0 kubenswrapper[19803]: I0313 01:28:26.716017 19803 scope.go:117] "RemoveContainer" containerID="1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b" Mar 13 01:28:26.716420 master-0 kubenswrapper[19803]: I0313 01:28:26.716365 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b"} err="failed to get container status \"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b\": rpc error: code = NotFound desc = could not find container \"1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b\": container with ID starting with 1160c0d92bde36086a048b5eec2e5faf99342d2c664536be10737beeabc9208b not found: ID does not exist" Mar 13 01:28:26.716420 master-0 kubenswrapper[19803]: I0313 01:28:26.716411 19803 scope.go:117] "RemoveContainer" containerID="5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82" Mar 13 
01:28:26.716988 master-0 kubenswrapper[19803]: I0313 01:28:26.716901 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82"} err="failed to get container status \"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82\": rpc error: code = NotFound desc = could not find container \"5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82\": container with ID starting with 5e195f86338e26675afc45418de7f256f7e36db0c91449b8e579235563f9cc82 not found: ID does not exist" Mar 13 01:28:27.928746 master-0 kubenswrapper[19803]: I0313 01:28:27.928693 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:28:28.011062 master-0 kubenswrapper[19803]: I0313 01:28:28.010969 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-var-lock\") pod \"943a993e-2a88-4bda-832f-d03e9d2d08d8\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " Mar 13 01:28:28.011062 master-0 kubenswrapper[19803]: I0313 01:28:28.011026 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-kubelet-dir\") pod \"943a993e-2a88-4bda-832f-d03e9d2d08d8\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " Mar 13 01:28:28.011436 master-0 kubenswrapper[19803]: I0313 01:28:28.011148 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/943a993e-2a88-4bda-832f-d03e9d2d08d8-kube-api-access\") pod \"943a993e-2a88-4bda-832f-d03e9d2d08d8\" (UID: \"943a993e-2a88-4bda-832f-d03e9d2d08d8\") " Mar 13 01:28:28.011672 master-0 kubenswrapper[19803]: I0313 01:28:28.011594 19803 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-var-lock" (OuterVolumeSpecName: "var-lock") pod "943a993e-2a88-4bda-832f-d03e9d2d08d8" (UID: "943a993e-2a88-4bda-832f-d03e9d2d08d8"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:28:28.011672 master-0 kubenswrapper[19803]: I0313 01:28:28.011603 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "943a993e-2a88-4bda-832f-d03e9d2d08d8" (UID: "943a993e-2a88-4bda-832f-d03e9d2d08d8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:28:28.015845 master-0 kubenswrapper[19803]: I0313 01:28:28.015803 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/943a993e-2a88-4bda-832f-d03e9d2d08d8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "943a993e-2a88-4bda-832f-d03e9d2d08d8" (UID: "943a993e-2a88-4bda-832f-d03e9d2d08d8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:28:28.112673 master-0 kubenswrapper[19803]: I0313 01:28:28.112590 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/943a993e-2a88-4bda-832f-d03e9d2d08d8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:28:28.112673 master-0 kubenswrapper[19803]: I0313 01:28:28.112655 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:28:28.112673 master-0 kubenswrapper[19803]: I0313 01:28:28.112669 19803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/943a993e-2a88-4bda-832f-d03e9d2d08d8-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:28:28.561547 master-0 kubenswrapper[19803]: I0313 01:28:28.561402 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/installer-4-master-0" event={"ID":"943a993e-2a88-4bda-832f-d03e9d2d08d8","Type":"ContainerDied","Data":"5847f166089a0b85efb90458337282fc04c8e6c7930d55283bcc324d11078c37"} Mar 13 01:28:28.561547 master-0 kubenswrapper[19803]: I0313 01:28:28.561522 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/installer-4-master-0" Mar 13 01:28:28.562098 master-0 kubenswrapper[19803]: I0313 01:28:28.561538 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5847f166089a0b85efb90458337282fc04c8e6c7930d55283bcc324d11078c37" Mar 13 01:28:37.773470 master-0 kubenswrapper[19803]: I0313 01:28:37.773380 19803 scope.go:117] "RemoveContainer" containerID="44c7d80aa4aadd7ed9cfa67d8c3f0e0defda54140db09140424d6dcf8461fe9e" Mar 13 01:28:37.798934 master-0 kubenswrapper[19803]: I0313 01:28:37.798840 19803 scope.go:117] "RemoveContainer" containerID="10e54ccf1c79035f275fa3427f827eeb618189c70d330140baae622cfa30b962" Mar 13 01:28:41.313866 master-0 kubenswrapper[19803]: I0313 01:28:41.313744 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:41.349999 master-0 kubenswrapper[19803]: I0313 01:28:41.349914 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="62152c23-2f11-4929-9571-ca52520ff20a" Mar 13 01:28:41.349999 master-0 kubenswrapper[19803]: I0313 01:28:41.349984 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podUID="62152c23-2f11-4929-9571-ca52520ff20a" Mar 13 01:28:41.375468 master-0 kubenswrapper[19803]: I0313 01:28:41.374357 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 01:28:41.375468 master-0 kubenswrapper[19803]: I0313 01:28:41.375012 19803 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:41.380565 master-0 kubenswrapper[19803]: I0313 01:28:41.380165 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 01:28:41.394758 master-0 kubenswrapper[19803]: I0313 01:28:41.394557 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:41.401795 master-0 kubenswrapper[19803]: I0313 01:28:41.400089 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-master-0"] Mar 13 01:28:41.439883 master-0 kubenswrapper[19803]: W0313 01:28:41.439788 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod961b4d54fbc741f185dfae043b7eaea5.slice/crio-ee831cd26d713a3746c2587478abc7f7e9be11cf2c89580b7169202c056cf9de WatchSource:0}: Error finding container ee831cd26d713a3746c2587478abc7f7e9be11cf2c89580b7169202c056cf9de: Status 404 returned error can't find the container with id ee831cd26d713a3746c2587478abc7f7e9be11cf2c89580b7169202c056cf9de Mar 13 01:28:41.723182 master-0 kubenswrapper[19803]: I0313 01:28:41.723107 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"961b4d54fbc741f185dfae043b7eaea5","Type":"ContainerStarted","Data":"63e145e3803462c268c4e6910ed7dab92a1a6fa87fab40bca2c812d743e35288"} Mar 13 01:28:41.723182 master-0 kubenswrapper[19803]: I0313 01:28:41.723183 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"961b4d54fbc741f185dfae043b7eaea5","Type":"ContainerStarted","Data":"ee831cd26d713a3746c2587478abc7f7e9be11cf2c89580b7169202c056cf9de"} Mar 13 01:28:42.732419 master-0 kubenswrapper[19803]: I0313 01:28:42.732347 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" 
event={"ID":"961b4d54fbc741f185dfae043b7eaea5","Type":"ContainerStarted","Data":"445df4ee399e1c487921c6af84169a294c88358c75f4e0027c6a5ee8e5122d7c"} Mar 13 01:28:42.732419 master-0 kubenswrapper[19803]: I0313 01:28:42.732405 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"961b4d54fbc741f185dfae043b7eaea5","Type":"ContainerStarted","Data":"f812298533dcce1c155ed1e03ffeaebfe3be9979c8c5b56601c50e53a63370ed"} Mar 13 01:28:42.732419 master-0 kubenswrapper[19803]: I0313 01:28:42.732414 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"961b4d54fbc741f185dfae043b7eaea5","Type":"ContainerStarted","Data":"ae5cfbbf713977df7ed93e5ff70ca598e851eafeacd8079a420fc10323afe487"} Mar 13 01:28:51.394910 master-0 kubenswrapper[19803]: I0313 01:28:51.394824 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:51.395808 master-0 kubenswrapper[19803]: I0313 01:28:51.395018 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:51.395808 master-0 kubenswrapper[19803]: I0313 01:28:51.395123 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:51.395808 master-0 kubenswrapper[19803]: I0313 01:28:51.395150 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:51.402347 master-0 kubenswrapper[19803]: I0313 01:28:51.402285 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:51.404552 master-0 
kubenswrapper[19803]: I0313 01:28:51.404476 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:51.439333 master-0 kubenswrapper[19803]: I0313 01:28:51.439214 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" podStartSLOduration=10.439181001 podStartE2EDuration="10.439181001s" podCreationTimestamp="2026-03-13 01:28:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:28:42.754482333 +0000 UTC m=+670.719630012" watchObservedRunningTime="2026-03-13 01:28:51.439181001 +0000 UTC m=+679.404328720" Mar 13 01:28:51.817948 master-0 kubenswrapper[19803]: I0313 01:28:51.817863 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:51.819399 master-0 kubenswrapper[19803]: I0313 01:28:51.819327 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:28:55.382279 master-0 kubenswrapper[19803]: I0313 01:28:55.382197 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-master-0" Mar 13 01:29:16.236896 master-0 kubenswrapper[19803]: I0313 01:29:16.236749 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 01:29:16.237690 master-0 kubenswrapper[19803]: E0313 01:29:16.237062 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="943a993e-2a88-4bda-832f-d03e9d2d08d8" containerName="installer" Mar 13 01:29:16.237690 master-0 kubenswrapper[19803]: I0313 01:29:16.237078 19803 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="943a993e-2a88-4bda-832f-d03e9d2d08d8" containerName="installer" Mar 13 01:29:16.237690 master-0 kubenswrapper[19803]: I0313 01:29:16.237238 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="943a993e-2a88-4bda-832f-d03e9d2d08d8" containerName="installer" Mar 13 01:29:16.237787 master-0 kubenswrapper[19803]: I0313 01:29:16.237751 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:16.240474 master-0 kubenswrapper[19803]: I0313 01:29:16.240427 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Mar 13 01:29:16.256387 master-0 kubenswrapper[19803]: I0313 01:29:16.256319 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 01:29:16.297099 master-0 kubenswrapper[19803]: I0313 01:29:16.297004 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:16.297435 master-0 kubenswrapper[19803]: I0313 01:29:16.297149 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-var-lock\") pod \"installer-2-master-0\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:16.297435 master-0 kubenswrapper[19803]: I0313 01:29:16.297220 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db457497-e2a1-4ca2-b518-6fa989ec866a-kube-api-access\") pod 
\"installer-2-master-0\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:16.398487 master-0 kubenswrapper[19803]: I0313 01:29:16.398363 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:16.398487 master-0 kubenswrapper[19803]: I0313 01:29:16.398509 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-var-lock\") pod \"installer-2-master-0\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:16.398959 master-0 kubenswrapper[19803]: I0313 01:29:16.398618 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db457497-e2a1-4ca2-b518-6fa989ec866a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:16.399191 master-0 kubenswrapper[19803]: I0313 01:29:16.399104 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-kubelet-dir\") pod \"installer-2-master-0\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:16.399325 master-0 kubenswrapper[19803]: I0313 01:29:16.399296 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-var-lock\") pod \"installer-2-master-0\" (UID: 
\"db457497-e2a1-4ca2-b518-6fa989ec866a\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:16.419485 master-0 kubenswrapper[19803]: I0313 01:29:16.419425 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db457497-e2a1-4ca2-b518-6fa989ec866a-kube-api-access\") pod \"installer-2-master-0\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:16.554251 master-0 kubenswrapper[19803]: I0313 01:29:16.554118 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:17.014346 master-0 kubenswrapper[19803]: I0313 01:29:17.014267 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 01:29:17.034686 master-0 kubenswrapper[19803]: I0313 01:29:17.034630 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"db457497-e2a1-4ca2-b518-6fa989ec866a","Type":"ContainerStarted","Data":"a090c86a418b6b5215c04eab8dba0f03ac3409317a909be56ec91a2a0c55b8e2"} Mar 13 01:29:18.044488 master-0 kubenswrapper[19803]: I0313 01:29:18.044412 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"db457497-e2a1-4ca2-b518-6fa989ec866a","Type":"ContainerStarted","Data":"bc82b20e70bf6440a99ddccc7359c22b9b6ef6146b9f511b1039197b5aadf5e3"} Mar 13 01:29:18.072490 master-0 kubenswrapper[19803]: I0313 01:29:18.072375 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-2-master-0" podStartSLOduration=2.072351904 podStartE2EDuration="2.072351904s" podCreationTimestamp="2026-03-13 01:29:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-03-13 01:29:18.068936643 +0000 UTC m=+706.034084312" watchObservedRunningTime="2026-03-13 01:29:18.072351904 +0000 UTC m=+706.037499583" Mar 13 01:29:31.443651 master-0 kubenswrapper[19803]: I0313 01:29:31.443493 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 01:29:31.444801 master-0 kubenswrapper[19803]: I0313 01:29:31.443928 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/installer-2-master-0" podUID="db457497-e2a1-4ca2-b518-6fa989ec866a" containerName="installer" containerID="cri-o://bc82b20e70bf6440a99ddccc7359c22b9b6ef6146b9f511b1039197b5aadf5e3" gracePeriod=30 Mar 13 01:29:34.816595 master-0 kubenswrapper[19803]: I0313 01:29:34.816458 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 01:29:34.818101 master-0 kubenswrapper[19803]: I0313 01:29:34.818045 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:34.855947 master-0 kubenswrapper[19803]: I0313 01:29:34.855855 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6481abb4-a276-4bf1-b16b-271e2ce7936e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:34.855947 master-0 kubenswrapper[19803]: I0313 01:29:34.855949 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-var-lock\") pod \"installer-3-master-0\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:34.856429 master-0 kubenswrapper[19803]: I0313 01:29:34.856037 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:34.896123 master-0 kubenswrapper[19803]: I0313 01:29:34.896028 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 01:29:34.957987 master-0 kubenswrapper[19803]: I0313 01:29:34.957840 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6481abb4-a276-4bf1-b16b-271e2ce7936e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:34.957987 master-0 kubenswrapper[19803]: I0313 01:29:34.957938 19803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-var-lock\") pod \"installer-3-master-0\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:34.958435 master-0 kubenswrapper[19803]: I0313 01:29:34.958031 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:34.958435 master-0 kubenswrapper[19803]: I0313 01:29:34.958180 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-kubelet-dir\") pod \"installer-3-master-0\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:34.958879 master-0 kubenswrapper[19803]: I0313 01:29:34.958816 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-var-lock\") pod \"installer-3-master-0\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:34.996286 master-0 kubenswrapper[19803]: I0313 01:29:34.996195 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6481abb4-a276-4bf1-b16b-271e2ce7936e-kube-api-access\") pod \"installer-3-master-0\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") " pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:35.164522 master-0 kubenswrapper[19803]: I0313 01:29:35.164326 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0" Mar 13 01:29:35.639364 master-0 kubenswrapper[19803]: I0313 01:29:35.638700 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-3-master-0"] Mar 13 01:29:35.655144 master-0 kubenswrapper[19803]: W0313 01:29:35.654841 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod6481abb4_a276_4bf1_b16b_271e2ce7936e.slice/crio-995db7272490ffcf7c7982bc7c132f32c3f7d8f1ed8b3276eb4e042b36b695df WatchSource:0}: Error finding container 995db7272490ffcf7c7982bc7c132f32c3f7d8f1ed8b3276eb4e042b36b695df: Status 404 returned error can't find the container with id 995db7272490ffcf7c7982bc7c132f32c3f7d8f1ed8b3276eb4e042b36b695df Mar 13 01:29:36.222539 master-0 kubenswrapper[19803]: I0313 01:29:36.221677 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"6481abb4-a276-4bf1-b16b-271e2ce7936e","Type":"ContainerStarted","Data":"11e9dc1615ec790861e827e67806dbd0152ceacafc8ec3db64b69eee9e16580e"} Mar 13 01:29:36.222539 master-0 kubenswrapper[19803]: I0313 01:29:36.221733 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"6481abb4-a276-4bf1-b16b-271e2ce7936e","Type":"ContainerStarted","Data":"995db7272490ffcf7c7982bc7c132f32c3f7d8f1ed8b3276eb4e042b36b695df"} Mar 13 01:29:36.248326 master-0 kubenswrapper[19803]: I0313 01:29:36.248206 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-3-master-0" podStartSLOduration=2.248183429 podStartE2EDuration="2.248183429s" podCreationTimestamp="2026-03-13 01:29:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:29:36.245372862 +0000 UTC m=+724.210520561" watchObservedRunningTime="2026-03-13 
01:29:36.248183429 +0000 UTC m=+724.213331108" Mar 13 01:29:48.329082 master-0 kubenswrapper[19803]: I0313 01:29:48.328880 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_db457497-e2a1-4ca2-b518-6fa989ec866a/installer/0.log" Mar 13 01:29:48.329082 master-0 kubenswrapper[19803]: I0313 01:29:48.328942 19803 generic.go:334] "Generic (PLEG): container finished" podID="db457497-e2a1-4ca2-b518-6fa989ec866a" containerID="bc82b20e70bf6440a99ddccc7359c22b9b6ef6146b9f511b1039197b5aadf5e3" exitCode=1 Mar 13 01:29:48.330073 master-0 kubenswrapper[19803]: I0313 01:29:48.329283 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"db457497-e2a1-4ca2-b518-6fa989ec866a","Type":"ContainerDied","Data":"bc82b20e70bf6440a99ddccc7359c22b9b6ef6146b9f511b1039197b5aadf5e3"} Mar 13 01:29:48.667310 master-0 kubenswrapper[19803]: I0313 01:29:48.667238 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_db457497-e2a1-4ca2-b518-6fa989ec866a/installer/0.log" Mar 13 01:29:48.667661 master-0 kubenswrapper[19803]: I0313 01:29:48.667368 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:48.791155 master-0 kubenswrapper[19803]: I0313 01:29:48.791103 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db457497-e2a1-4ca2-b518-6fa989ec866a-kube-api-access\") pod \"db457497-e2a1-4ca2-b518-6fa989ec866a\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " Mar 13 01:29:48.791399 master-0 kubenswrapper[19803]: I0313 01:29:48.791191 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-var-lock\") pod \"db457497-e2a1-4ca2-b518-6fa989ec866a\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " Mar 13 01:29:48.791399 master-0 kubenswrapper[19803]: I0313 01:29:48.791279 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-kubelet-dir\") pod \"db457497-e2a1-4ca2-b518-6fa989ec866a\" (UID: \"db457497-e2a1-4ca2-b518-6fa989ec866a\") " Mar 13 01:29:48.791577 master-0 kubenswrapper[19803]: I0313 01:29:48.791442 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "db457497-e2a1-4ca2-b518-6fa989ec866a" (UID: "db457497-e2a1-4ca2-b518-6fa989ec866a"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:29:48.791577 master-0 kubenswrapper[19803]: I0313 01:29:48.791471 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-var-lock" (OuterVolumeSpecName: "var-lock") pod "db457497-e2a1-4ca2-b518-6fa989ec866a" (UID: "db457497-e2a1-4ca2-b518-6fa989ec866a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:29:48.792067 master-0 kubenswrapper[19803]: I0313 01:29:48.792019 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:29:48.792067 master-0 kubenswrapper[19803]: I0313 01:29:48.792059 19803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/db457497-e2a1-4ca2-b518-6fa989ec866a-kubelet-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:29:48.796013 master-0 kubenswrapper[19803]: I0313 01:29:48.795943 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db457497-e2a1-4ca2-b518-6fa989ec866a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "db457497-e2a1-4ca2-b518-6fa989ec866a" (UID: "db457497-e2a1-4ca2-b518-6fa989ec866a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:29:48.893302 master-0 kubenswrapper[19803]: I0313 01:29:48.893103 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/db457497-e2a1-4ca2-b518-6fa989ec866a-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:29:49.342453 master-0 kubenswrapper[19803]: I0313 01:29:49.342358 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-2-master-0_db457497-e2a1-4ca2-b518-6fa989ec866a/installer/0.log" Mar 13 01:29:49.343648 master-0 kubenswrapper[19803]: I0313 01:29:49.342479 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-2-master-0" event={"ID":"db457497-e2a1-4ca2-b518-6fa989ec866a","Type":"ContainerDied","Data":"a090c86a418b6b5215c04eab8dba0f03ac3409317a909be56ec91a2a0c55b8e2"} Mar 13 01:29:49.343648 master-0 kubenswrapper[19803]: I0313 01:29:49.342597 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-2-master-0" Mar 13 01:29:49.343648 master-0 kubenswrapper[19803]: I0313 01:29:49.342626 19803 scope.go:117] "RemoveContainer" containerID="bc82b20e70bf6440a99ddccc7359c22b9b6ef6146b9f511b1039197b5aadf5e3" Mar 13 01:29:49.396278 master-0 kubenswrapper[19803]: I0313 01:29:49.396105 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 01:29:49.411774 master-0 kubenswrapper[19803]: I0313 01:29:49.411688 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/installer-2-master-0"] Mar 13 01:29:50.329955 master-0 kubenswrapper[19803]: I0313 01:29:50.329801 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db457497-e2a1-4ca2-b518-6fa989ec866a" path="/var/lib/kubelet/pods/db457497-e2a1-4ca2-b518-6fa989ec866a/volumes" Mar 13 01:29:55.017281 master-0 kubenswrapper[19803]: I0313 01:29:55.017179 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:29:55.022464 master-0 kubenswrapper[19803]: I0313 01:29:55.022406 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod \"installer-1-master-0\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " pod="openshift-kube-apiserver/installer-1-master-0" Mar 13 01:29:55.118129 master-0 kubenswrapper[19803]: I0313 01:29:55.118058 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") pod 
\"fdcd8438-d33f-490f-a841-8944c58506f8\" (UID: \"fdcd8438-d33f-490f-a841-8944c58506f8\") " Mar 13 01:29:55.122761 master-0 kubenswrapper[19803]: I0313 01:29:55.122692 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "fdcd8438-d33f-490f-a841-8944c58506f8" (UID: "fdcd8438-d33f-490f-a841-8944c58506f8"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:29:55.219493 master-0 kubenswrapper[19803]: I0313 01:29:55.219395 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/fdcd8438-d33f-490f-a841-8944c58506f8-kube-api-access\") on node \"master-0\" DevicePath \"\"" Mar 13 01:30:33.916196 master-0 kubenswrapper[19803]: I0313 01:30:33.916139 19803 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 01:30:33.917202 master-0 kubenswrapper[19803]: I0313 01:30:33.917172 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver" containerID="cri-o://3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6" gracePeriod=15 Mar 13 01:30:33.917321 master-0 kubenswrapper[19803]: I0313 01:30:33.917247 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3" gracePeriod=15 Mar 13 01:30:33.917381 master-0 kubenswrapper[19803]: I0313 01:30:33.917334 19803 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702" gracePeriod=15 Mar 13 01:30:33.917474 master-0 kubenswrapper[19803]: I0313 01:30:33.917400 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-check-endpoints" containerID="cri-o://ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918" gracePeriod=15 Mar 13 01:30:33.917540 master-0 kubenswrapper[19803]: I0313 01:30:33.917418 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-syncer" containerID="cri-o://6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3" gracePeriod=15 Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918107 19803 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-master-0"] Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: E0313 01:30:33.918416 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="setup" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918430 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="setup" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: E0313 01:30:33.918451 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918461 19803 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: E0313 01:30:33.918525 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db457497-e2a1-4ca2-b518-6fa989ec866a" containerName="installer" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918540 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="db457497-e2a1-4ca2-b518-6fa989ec866a" containerName="installer" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: E0313 01:30:33.918568 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918586 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: E0313 01:30:33.918612 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-syncer" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918621 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-syncer" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: E0313 01:30:33.918641 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-insecure-readyz" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918650 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-insecure-readyz" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: E0313 01:30:33.918678 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" 
containerName="kube-apiserver-check-endpoints" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918687 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-check-endpoints" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918832 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-syncer" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918863 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 01:30:33.918879 master-0 kubenswrapper[19803]: I0313 01:30:33.918888 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver" Mar 13 01:30:33.919728 master-0 kubenswrapper[19803]: I0313 01:30:33.918905 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 01:30:33.919728 master-0 kubenswrapper[19803]: I0313 01:30:33.918919 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-insecure-readyz" Mar 13 01:30:33.919728 master-0 kubenswrapper[19803]: I0313 01:30:33.918942 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="db457497-e2a1-4ca2-b518-6fa989ec866a" containerName="installer" Mar 13 01:30:33.919728 master-0 kubenswrapper[19803]: I0313 01:30:33.918953 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-check-endpoints" Mar 13 01:30:33.919934 master-0 kubenswrapper[19803]: E0313 01:30:33.919836 19803 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 01:30:33.919934 master-0 kubenswrapper[19803]: I0313 01:30:33.919863 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerName="kube-apiserver-cert-regeneration-controller" Mar 13 01:30:33.923455 master-0 kubenswrapper[19803]: I0313 01:30:33.923399 19803 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 01:30:33.924659 master-0 kubenswrapper[19803]: I0313 01:30:33.924641 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:30:33.929064 master-0 kubenswrapper[19803]: I0313 01:30:33.929007 19803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" podUID="077dd10388b9e3e48a07382126e86621" Mar 13 01:30:33.938072 master-0 kubenswrapper[19803]: I0313 01:30:33.938013 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:30:33.941285 master-0 kubenswrapper[19803]: I0313 01:30:33.941142 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:30:33.942638 master-0 
kubenswrapper[19803]: I0313 01:30:33.941862 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:30:33.942748 master-0 kubenswrapper[19803]: I0313 01:30:33.942706 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:30:33.942833 master-0 kubenswrapper[19803]: I0313 01:30:33.942810 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:30:33.942928 master-0 kubenswrapper[19803]: I0313 01:30:33.942851 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:30:33.943011 master-0 kubenswrapper[19803]: I0313 01:30:33.942938 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod 
\"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:30:33.943097 master-0 kubenswrapper[19803]: I0313 01:30:33.943079 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:30:33.986448 master-0 kubenswrapper[19803]: E0313 01:30:33.986290 19803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189c42776999ceae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:cdcecc61ff5eeb08bd2a3ac12599e4f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:30:33.917361838 +0000 UTC m=+781.882509547,LastTimestamp:2026-03-13 01:30:33.917361838 +0000 UTC m=+781.882509547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}" Mar 13 01:30:34.043794 master-0 kubenswrapper[19803]: E0313 01:30:34.043738 19803 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:30:34.044394 master-0 kubenswrapper[19803]: I0313 01:30:34.044364 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:30:34.044500 master-0 kubenswrapper[19803]: I0313 01:30:34.044414 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:30:34.044500 master-0 kubenswrapper[19803]: I0313 01:30:34.044452 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:30:34.044665 master-0 kubenswrapper[19803]: I0313 01:30:34.044610 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:30:34.044812 master-0 kubenswrapper[19803]: I0313 01:30:34.044619 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod 
\"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 01:30:34.044812 master-0 kubenswrapper[19803]: I0313 01:30:34.044693 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 01:30:34.044812 master-0 kubenswrapper[19803]: I0313 01:30:34.044718 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 01:30:34.044812 master-0 kubenswrapper[19803]: I0313 01:30:34.044742 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 01:30:34.044812 master-0 kubenswrapper[19803]: I0313 01:30:34.044776 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 01:30:34.044812 master-0 kubenswrapper[19803]: I0313 01:30:34.044695 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-cert-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:30:34.044812 master-0 kubenswrapper[19803]: I0313 01:30:34.044807 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:30:34.045078 master-0 kubenswrapper[19803]: I0313 01:30:34.044862 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 01:30:34.045078 master-0 kubenswrapper[19803]: I0313 01:30:34.044861 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"kube-apiserver-startup-monitor-master-0\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 01:30:34.045078 master-0 kubenswrapper[19803]: I0313 01:30:34.044863 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:30:34.045078 master-0 kubenswrapper[19803]: I0313 01:30:34.044909 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-resource-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:30:34.045078 master-0 kubenswrapper[19803]: I0313 01:30:34.044885 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/077dd10388b9e3e48a07382126e86621-audit-dir\") pod \"kube-apiserver-master-0\" (UID: \"077dd10388b9e3e48a07382126e86621\") " pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:30:34.345200 master-0 kubenswrapper[19803]: I0313 01:30:34.345118 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 01:30:34.381266 master-0 kubenswrapper[19803]: W0313 01:30:34.381152 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod899242a15b2bdf3b4a04fb323647ca94.slice/crio-1a3b32f72e69050646bbb5a8df8d2b3a88bb3ec8b808f1b5c7974317b9c2c8c2 WatchSource:0}: Error finding container 1a3b32f72e69050646bbb5a8df8d2b3a88bb3ec8b808f1b5c7974317b9c2c8c2: Status 404 returned error can't find the container with id 1a3b32f72e69050646bbb5a8df8d2b3a88bb3ec8b808f1b5c7974317b9c2c8c2
Mar 13 01:30:34.801380 master-0 kubenswrapper[19803]: I0313 01:30:34.801284 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"6a0ea16b4eddf0e26b82f667b75df39ac5650bd6ba07f40d5048e4ffe6bf4805"}
Mar 13 01:30:34.801727 master-0 kubenswrapper[19803]: I0313 01:30:34.801395 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" event={"ID":"899242a15b2bdf3b4a04fb323647ca94","Type":"ContainerStarted","Data":"1a3b32f72e69050646bbb5a8df8d2b3a88bb3ec8b808f1b5c7974317b9c2c8c2"}
Mar 13 01:30:34.803311 master-0 kubenswrapper[19803]: E0313 01:30:34.803215 19803 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"
Mar 13 01:30:34.803491 master-0 kubenswrapper[19803]: I0313 01:30:34.803426 19803 generic.go:334] "Generic (PLEG): container finished" podID="6481abb4-a276-4bf1-b16b-271e2ce7936e" containerID="11e9dc1615ec790861e827e67806dbd0152ceacafc8ec3db64b69eee9e16580e" exitCode=0
Mar 13 01:30:34.803666 master-0 kubenswrapper[19803]: I0313 01:30:34.803619 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"6481abb4-a276-4bf1-b16b-271e2ce7936e","Type":"ContainerDied","Data":"11e9dc1615ec790861e827e67806dbd0152ceacafc8ec3db64b69eee9e16580e"}
Mar 13 01:30:34.805627 master-0 kubenswrapper[19803]: I0313 01:30:34.805501 19803 status_manager.go:851] "Failed to get status for pod" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:34.808544 master-0 kubenswrapper[19803]: I0313 01:30:34.808472 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_cdcecc61ff5eeb08bd2a3ac12599e4f9/kube-apiserver-cert-syncer/0.log"
Mar 13 01:30:34.809682 master-0 kubenswrapper[19803]: I0313 01:30:34.809642 19803 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3" exitCode=0
Mar 13 01:30:34.809682 master-0 kubenswrapper[19803]: I0313 01:30:34.809669 19803 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918" exitCode=0
Mar 13 01:30:34.809682 master-0 kubenswrapper[19803]: I0313 01:30:34.809682 19803 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702" exitCode=0
Mar 13 01:30:34.809989 master-0 kubenswrapper[19803]: I0313 01:30:34.809693 19803 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3" exitCode=2
Mar 13 01:30:34.809989 master-0 kubenswrapper[19803]: I0313 01:30:34.809799 19803 scope.go:117] "RemoveContainer" containerID="1671c753884a85b9d5990bcf5a091faa5ed2c13052477fadfd66f9da210dc6ae"
Mar 13 01:30:35.826423 master-0 kubenswrapper[19803]: I0313 01:30:35.826239 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_cdcecc61ff5eeb08bd2a3ac12599e4f9/kube-apiserver-cert-syncer/0.log"
Mar 13 01:30:36.388629 master-0 kubenswrapper[19803]: I0313 01:30:36.388391 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 01:30:36.389795 master-0 kubenswrapper[19803]: I0313 01:30:36.389702 19803 status_manager.go:851] "Failed to get status for pod" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:36.395072 master-0 kubenswrapper[19803]: I0313 01:30:36.395029 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_cdcecc61ff5eeb08bd2a3ac12599e4f9/kube-apiserver-cert-syncer/0.log"
Mar 13 01:30:36.395958 master-0 kubenswrapper[19803]: I0313 01:30:36.395919 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:30:36.396938 master-0 kubenswrapper[19803]: I0313 01:30:36.396869 19803 status_manager.go:851] "Failed to get status for pod" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:36.397592 master-0 kubenswrapper[19803]: I0313 01:30:36.397528 19803 status_manager.go:851] "Failed to get status for pod" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:36.439488 master-0 kubenswrapper[19803]: I0313 01:30:36.439420 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") pod \"cdcecc61ff5eeb08bd2a3ac12599e4f9\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") "
Mar 13 01:30:36.439488 master-0 kubenswrapper[19803]: I0313 01:30:36.439502 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6481abb4-a276-4bf1-b16b-271e2ce7936e-kube-api-access\") pod \"6481abb4-a276-4bf1-b16b-271e2ce7936e\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") "
Mar 13 01:30:36.439809 master-0 kubenswrapper[19803]: I0313 01:30:36.439563 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-kubelet-dir\") pod \"6481abb4-a276-4bf1-b16b-271e2ce7936e\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") "
Mar 13 01:30:36.439809 master-0 kubenswrapper[19803]: I0313 01:30:36.439552 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "cdcecc61ff5eeb08bd2a3ac12599e4f9" (UID: "cdcecc61ff5eeb08bd2a3ac12599e4f9"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:30:36.439809 master-0 kubenswrapper[19803]: I0313 01:30:36.439615 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") pod \"cdcecc61ff5eeb08bd2a3ac12599e4f9\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") "
Mar 13 01:30:36.439809 master-0 kubenswrapper[19803]: I0313 01:30:36.439636 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "6481abb4-a276-4bf1-b16b-271e2ce7936e" (UID: "6481abb4-a276-4bf1-b16b-271e2ce7936e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:30:36.439809 master-0 kubenswrapper[19803]: I0313 01:30:36.439654 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-var-lock\") pod \"6481abb4-a276-4bf1-b16b-271e2ce7936e\" (UID: \"6481abb4-a276-4bf1-b16b-271e2ce7936e\") "
Mar 13 01:30:36.439809 master-0 kubenswrapper[19803]: I0313 01:30:36.439670 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "cdcecc61ff5eeb08bd2a3ac12599e4f9" (UID: "cdcecc61ff5eeb08bd2a3ac12599e4f9"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:30:36.439809 master-0 kubenswrapper[19803]: I0313 01:30:36.439710 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") pod \"cdcecc61ff5eeb08bd2a3ac12599e4f9\" (UID: \"cdcecc61ff5eeb08bd2a3ac12599e4f9\") "
Mar 13 01:30:36.439809 master-0 kubenswrapper[19803]: I0313 01:30:36.439792 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-var-lock" (OuterVolumeSpecName: "var-lock") pod "6481abb4-a276-4bf1-b16b-271e2ce7936e" (UID: "6481abb4-a276-4bf1-b16b-271e2ce7936e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:30:36.440161 master-0 kubenswrapper[19803]: I0313 01:30:36.439912 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "cdcecc61ff5eeb08bd2a3ac12599e4f9" (UID: "cdcecc61ff5eeb08bd2a3ac12599e4f9"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Mar 13 01:30:36.440161 master-0 kubenswrapper[19803]: I0313 01:30:36.440020 19803 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-kubelet-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:30:36.440161 master-0 kubenswrapper[19803]: I0313 01:30:36.440045 19803 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-audit-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:30:36.440161 master-0 kubenswrapper[19803]: I0313 01:30:36.440063 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/6481abb4-a276-4bf1-b16b-271e2ce7936e-var-lock\") on node \"master-0\" DevicePath \"\""
Mar 13 01:30:36.440161 master-0 kubenswrapper[19803]: I0313 01:30:36.440080 19803 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-cert-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:30:36.440161 master-0 kubenswrapper[19803]: I0313 01:30:36.440097 19803 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/cdcecc61ff5eeb08bd2a3ac12599e4f9-resource-dir\") on node \"master-0\" DevicePath \"\""
Mar 13 01:30:36.442469 master-0 kubenswrapper[19803]: I0313 01:30:36.442422 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6481abb4-a276-4bf1-b16b-271e2ce7936e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "6481abb4-a276-4bf1-b16b-271e2ce7936e" (UID: "6481abb4-a276-4bf1-b16b-271e2ce7936e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue ""
Mar 13 01:30:36.542023 master-0 kubenswrapper[19803]: I0313 01:30:36.541932 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/6481abb4-a276-4bf1-b16b-271e2ce7936e-kube-api-access\") on node \"master-0\" DevicePath \"\""
Mar 13 01:30:36.838466 master-0 kubenswrapper[19803]: I0313 01:30:36.838415 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_cdcecc61ff5eeb08bd2a3ac12599e4f9/kube-apiserver-cert-syncer/0.log"
Mar 13 01:30:36.839306 master-0 kubenswrapper[19803]: I0313 01:30:36.839269 19803 generic.go:334] "Generic (PLEG): container finished" podID="cdcecc61ff5eeb08bd2a3ac12599e4f9" containerID="3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6" exitCode=0
Mar 13 01:30:36.839391 master-0 kubenswrapper[19803]: I0313 01:30:36.839369 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:30:36.839460 master-0 kubenswrapper[19803]: I0313 01:30:36.839441 19803 scope.go:117] "RemoveContainer" containerID="e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3"
Mar 13 01:30:36.841728 master-0 kubenswrapper[19803]: I0313 01:30:36.841686 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-3-master-0" event={"ID":"6481abb4-a276-4bf1-b16b-271e2ce7936e","Type":"ContainerDied","Data":"995db7272490ffcf7c7982bc7c132f32c3f7d8f1ed8b3276eb4e042b36b695df"}
Mar 13 01:30:36.841801 master-0 kubenswrapper[19803]: I0313 01:30:36.841740 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="995db7272490ffcf7c7982bc7c132f32c3f7d8f1ed8b3276eb4e042b36b695df"
Mar 13 01:30:36.841845 master-0 kubenswrapper[19803]: I0313 01:30:36.841800 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-3-master-0"
Mar 13 01:30:36.865454 master-0 kubenswrapper[19803]: I0313 01:30:36.865390 19803 scope.go:117] "RemoveContainer" containerID="ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918"
Mar 13 01:30:36.866948 master-0 kubenswrapper[19803]: I0313 01:30:36.866862 19803 status_manager.go:851] "Failed to get status for pod" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:36.867650 master-0 kubenswrapper[19803]: I0313 01:30:36.867596 19803 status_manager.go:851] "Failed to get status for pod" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:36.868462 master-0 kubenswrapper[19803]: I0313 01:30:36.868408 19803 status_manager.go:851] "Failed to get status for pod" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:36.869271 master-0 kubenswrapper[19803]: I0313 01:30:36.869194 19803 status_manager.go:851] "Failed to get status for pod" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" pod="openshift-kube-apiserver/kube-apiserver-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:36.888797 master-0 kubenswrapper[19803]: I0313 01:30:36.888738 19803 scope.go:117] "RemoveContainer" containerID="70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702"
Mar 13 01:30:36.918171 master-0 kubenswrapper[19803]: I0313 01:30:36.918098 19803 scope.go:117] "RemoveContainer" containerID="6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3"
Mar 13 01:30:36.946912 master-0 kubenswrapper[19803]: I0313 01:30:36.946756 19803 scope.go:117] "RemoveContainer" containerID="3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6"
Mar 13 01:30:36.974415 master-0 kubenswrapper[19803]: I0313 01:30:36.974338 19803 scope.go:117] "RemoveContainer" containerID="ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89"
Mar 13 01:30:36.999549 master-0 kubenswrapper[19803]: I0313 01:30:36.998257 19803 scope.go:117] "RemoveContainer" containerID="e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3"
Mar 13 01:30:37.000462 master-0 kubenswrapper[19803]: E0313 01:30:36.999550 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3\": container with ID starting with e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3 not found: ID does not exist" containerID="e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3"
Mar 13 01:30:37.000462 master-0 kubenswrapper[19803]: I0313 01:30:36.999631 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3"} err="failed to get container status \"e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3\": rpc error: code = NotFound desc = could not find container \"e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3\": container with ID starting with e2c0010adf37f61bb45d07ae7ee32855560c2ea4517ba28398a6770f396f6fb3 not found: ID does not exist"
Mar 13 01:30:37.000462 master-0 kubenswrapper[19803]: I0313 01:30:36.999678 19803 scope.go:117] "RemoveContainer" containerID="ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918"
Mar 13 01:30:37.000462 master-0 kubenswrapper[19803]: E0313 01:30:37.000299 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918\": container with ID starting with ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918 not found: ID does not exist" containerID="ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918"
Mar 13 01:30:37.000462 master-0 kubenswrapper[19803]: I0313 01:30:37.000374 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918"} err="failed to get container status \"ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918\": rpc error: code = NotFound desc = could not find container \"ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918\": container with ID starting with ebf8bbc4c3c3c884e529c16c12fb27c7a9049b261b3870c300dd8c48de6c3918 not found: ID does not exist"
Mar 13 01:30:37.000462 master-0 kubenswrapper[19803]: I0313 01:30:37.000429 19803 scope.go:117] "RemoveContainer" containerID="70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702"
Mar 13 01:30:37.000884 master-0 kubenswrapper[19803]: E0313 01:30:37.000839 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702\": container with ID starting with 70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702 not found: ID does not exist" containerID="70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702"
Mar 13 01:30:37.001010 master-0 kubenswrapper[19803]: I0313 01:30:37.000883 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702"} err="failed to get container status \"70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702\": rpc error: code = NotFound desc = could not find container \"70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702\": container with ID starting with 70c06634a0d4a3fc94ab94285054c3faad4af8e8576aa3ef8dd31d2c0070a702 not found: ID does not exist"
Mar 13 01:30:37.001010 master-0 kubenswrapper[19803]: I0313 01:30:37.000907 19803 scope.go:117] "RemoveContainer" containerID="6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3"
Mar 13 01:30:37.001798 master-0 kubenswrapper[19803]: E0313 01:30:37.001362 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3\": container with ID starting with 6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3 not found: ID does not exist" containerID="6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3"
Mar 13 01:30:37.001798 master-0 kubenswrapper[19803]: I0313 01:30:37.001455 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3"} err="failed to get container status \"6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3\": rpc error: code = NotFound desc = could not find container \"6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3\": container with ID starting with 6bdfb223b506d129f7810a5bdff1788db2c8a87194f7a643ab4d1467a1b50ed3 not found: ID does not exist"
Mar 13 01:30:37.001798 master-0 kubenswrapper[19803]: I0313 01:30:37.001486 19803 scope.go:117] "RemoveContainer" containerID="3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6"
Mar 13 01:30:37.002730 master-0 kubenswrapper[19803]: E0313 01:30:37.001955 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6\": container with ID starting with 3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6 not found: ID does not exist" containerID="3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6"
Mar 13 01:30:37.002730 master-0 kubenswrapper[19803]: I0313 01:30:37.002001 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6"} err="failed to get container status \"3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6\": rpc error: code = NotFound desc = could not find container \"3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6\": container with ID starting with 3bf146d3f17bc0f68876989e45f2e006a250d2f2f9373ddd89eb9af5dfb2cbb6 not found: ID does not exist"
Mar 13 01:30:37.002730 master-0 kubenswrapper[19803]: I0313 01:30:37.002032 19803 scope.go:117] "RemoveContainer" containerID="ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89"
Mar 13 01:30:37.002730 master-0 kubenswrapper[19803]: E0313 01:30:37.002358 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89\": container with ID starting with ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89 not found: ID does not exist" containerID="ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89"
Mar 13 01:30:37.002730 master-0 kubenswrapper[19803]: I0313 01:30:37.002401 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89"} err="failed to get container status \"ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89\": rpc error: code = NotFound desc = could not find container \"ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89\": container with ID starting with ca98066b3812c38eefbdd162dfa6b89db13ab03f7890eba8df49177c332fbe89 not found: ID does not exist"
Mar 13 01:30:38.325040 master-0 kubenswrapper[19803]: I0313 01:30:38.324939 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdcecc61ff5eeb08bd2a3ac12599e4f9" path="/var/lib/kubelet/pods/cdcecc61ff5eeb08bd2a3ac12599e4f9/volumes"
Mar 13 01:30:40.724891 master-0 kubenswrapper[19803]: E0313 01:30:40.724813 19803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:40.726178 master-0 kubenswrapper[19803]: E0313 01:30:40.726110 19803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:40.727090 master-0 kubenswrapper[19803]: E0313 01:30:40.727034 19803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:40.728022 master-0 kubenswrapper[19803]: E0313 01:30:40.727952 19803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:40.728792 master-0 kubenswrapper[19803]: E0313 01:30:40.728742 19803 controller.go:195] "Failed to update lease" err="Put \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:40.728849 master-0 kubenswrapper[19803]: I0313 01:30:40.728796 19803 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease"
Mar 13 01:30:40.729782 master-0 kubenswrapper[19803]: E0313 01:30:40.729731 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="200ms"
Mar 13 01:30:40.931770 master-0 kubenswrapper[19803]: E0313 01:30:40.931654 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="400ms"
Mar 13 01:30:41.334148 master-0 kubenswrapper[19803]: E0313 01:30:41.334017 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="800ms"
Mar 13 01:30:41.523168 master-0 kubenswrapper[19803]: E0313 01:30:41.522853 19803 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 192.168.32.10:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-master-0.189c42776999ceae openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-master-0,UID:cdcecc61ff5eeb08bd2a3ac12599e4f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Killing,Message:Stopping container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:master-0,},FirstTimestamp:2026-03-13 01:30:33.917361838 +0000 UTC m=+781.882509547,LastTimestamp:2026-03-13 01:30:33.917361838 +0000 UTC m=+781.882509547,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:master-0,}"
Mar 13 01:30:42.136263 master-0 kubenswrapper[19803]: E0313 01:30:42.136049 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="1.6s"
Mar 13 01:30:42.324335 master-0 kubenswrapper[19803]: I0313 01:30:42.324218 19803 status_manager.go:851] "Failed to get status for pod" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:43.737991 master-0 kubenswrapper[19803]: E0313 01:30:43.737878 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="3.2s"
Mar 13 01:30:46.314670 master-0 kubenswrapper[19803]: I0313 01:30:46.314432 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:30:46.316836 master-0 kubenswrapper[19803]: I0313 01:30:46.316717 19803 status_manager.go:851] "Failed to get status for pod" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused"
Mar 13 01:30:46.356308 master-0 kubenswrapper[19803]: I0313 01:30:46.356163 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="da10e656-1fb2-4dad-bb7c-2d5c150724b2"
Mar 13 01:30:46.356308 master-0 kubenswrapper[19803]: I0313 01:30:46.356215 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="da10e656-1fb2-4dad-bb7c-2d5c150724b2"
Mar 13 01:30:46.357704 master-0 kubenswrapper[19803]: E0313 01:30:46.357570 19803 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:30:46.359668 master-0 kubenswrapper[19803]: I0313 01:30:46.359337 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:30:46.396160 master-0 kubenswrapper[19803]: W0313 01:30:46.396061 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod077dd10388b9e3e48a07382126e86621.slice/crio-81bace4a4b3f020ae336f0dabf0560d413a755f81781a2df9ad158c8c2b1cb77 WatchSource:0}: Error finding container 81bace4a4b3f020ae336f0dabf0560d413a755f81781a2df9ad158c8c2b1cb77: Status 404 returned error can't find the container with id 81bace4a4b3f020ae336f0dabf0560d413a755f81781a2df9ad158c8c2b1cb77
Mar 13 01:30:46.945997 master-0 kubenswrapper[19803]: E0313 01:30:46.945603 19803 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.sno.openstack.lab:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/master-0?timeout=10s\": dial tcp 192.168.32.10:6443: connect: connection refused" interval="6.4s"
Mar 13 01:30:46.958158 master-0 kubenswrapper[19803]: I0313 01:30:46.958067 19803 generic.go:334] "Generic (PLEG): container finished" podID="077dd10388b9e3e48a07382126e86621" containerID="a992c2b12ddea5a01d482f975025f76ab605c64e21f3f8254531c9c593a0f515" exitCode=0
Mar 13 01:30:46.958370 master-0 kubenswrapper[19803]: I0313 01:30:46.958161 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerDied","Data":"a992c2b12ddea5a01d482f975025f76ab605c64e21f3f8254531c9c593a0f515"}
Mar 13 01:30:46.958370 master-0 kubenswrapper[19803]: I0313 01:30:46.958240 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"81bace4a4b3f020ae336f0dabf0560d413a755f81781a2df9ad158c8c2b1cb77"}
Mar 13 01:30:46.958853 master-0 kubenswrapper[19803]: I0313 01:30:46.958803 19803
kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="da10e656-1fb2-4dad-bb7c-2d5c150724b2" Mar 13 01:30:46.959042 master-0 kubenswrapper[19803]: I0313 01:30:46.958855 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="da10e656-1fb2-4dad-bb7c-2d5c150724b2" Mar 13 01:30:46.960016 master-0 kubenswrapper[19803]: E0313 01:30:46.959943 19803 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:30:46.960123 master-0 kubenswrapper[19803]: I0313 01:30:46.959974 19803 status_manager.go:851] "Failed to get status for pod" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" pod="openshift-kube-apiserver/installer-3-master-0" err="Get \"https://api-int.sno.openstack.lab:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-3-master-0\": dial tcp 192.168.32.10:6443: connect: connection refused" Mar 13 01:30:47.976207 master-0 kubenswrapper[19803]: I0313 01:30:47.976101 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"8bc391311ffe241a58346caed21b20668d160a355e977f4e4cb090f7d969765a"} Mar 13 01:30:47.976744 master-0 kubenswrapper[19803]: I0313 01:30:47.976203 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"3840e68e0d8c8d542c3549494e5b5174b2b3d862102bee55603f37b836fd4f12"} Mar 13 01:30:47.976744 master-0 kubenswrapper[19803]: I0313 01:30:47.976241 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"4c40de6901215b2b64c3a4a40b909655ed97e59d9dd3f839620efe94f72c5501"} Mar 13 01:30:47.981086 master-0 kubenswrapper[19803]: I0313 01:30:47.981052 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_961b4d54fbc741f185dfae043b7eaea5/kube-controller-manager/0.log" Mar 13 01:30:47.981164 master-0 kubenswrapper[19803]: I0313 01:30:47.981135 19803 generic.go:334] "Generic (PLEG): container finished" podID="961b4d54fbc741f185dfae043b7eaea5" containerID="63e145e3803462c268c4e6910ed7dab92a1a6fa87fab40bca2c812d743e35288" exitCode=1 Mar 13 01:30:47.981240 master-0 kubenswrapper[19803]: I0313 01:30:47.981170 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"961b4d54fbc741f185dfae043b7eaea5","Type":"ContainerDied","Data":"63e145e3803462c268c4e6910ed7dab92a1a6fa87fab40bca2c812d743e35288"} Mar 13 01:30:47.982116 master-0 kubenswrapper[19803]: I0313 01:30:47.982087 19803 scope.go:117] "RemoveContainer" containerID="63e145e3803462c268c4e6910ed7dab92a1a6fa87fab40bca2c812d743e35288" Mar 13 01:30:48.992694 master-0 kubenswrapper[19803]: I0313 01:30:48.992643 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_961b4d54fbc741f185dfae043b7eaea5/kube-controller-manager/0.log" Mar 13 01:30:48.993214 master-0 kubenswrapper[19803]: I0313 01:30:48.992818 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" event={"ID":"961b4d54fbc741f185dfae043b7eaea5","Type":"ContainerStarted","Data":"fc74ba5ac0e075e89cf6460fb1a804339ca8873e9922f77f078161a19d748189"} Mar 13 01:30:48.997582 master-0 kubenswrapper[19803]: I0313 01:30:48.997541 19803 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"647e9f6ed545a3285389187df7c400205d710baa5cf70cbdf15fd4ab6a9ee2ce"} Mar 13 01:30:48.997692 master-0 kubenswrapper[19803]: I0313 01:30:48.997678 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" event={"ID":"077dd10388b9e3e48a07382126e86621","Type":"ContainerStarted","Data":"dc3f658e2b1696e34e6475b2d1ec4b378fee50480c885f851c4477aa542396be"} Mar 13 01:30:48.997872 master-0 kubenswrapper[19803]: I0313 01:30:48.997827 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:30:48.997976 master-0 kubenswrapper[19803]: I0313 01:30:48.997935 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="da10e656-1fb2-4dad-bb7c-2d5c150724b2" Mar 13 01:30:48.997976 master-0 kubenswrapper[19803]: I0313 01:30:48.997976 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="da10e656-1fb2-4dad-bb7c-2d5c150724b2" Mar 13 01:30:51.360147 master-0 kubenswrapper[19803]: I0313 01:30:51.360019 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:30:51.361554 master-0 kubenswrapper[19803]: I0313 01:30:51.360541 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:30:51.368818 master-0 kubenswrapper[19803]: I0313 01:30:51.368747 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:30:51.395889 master-0 kubenswrapper[19803]: I0313 01:30:51.395749 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:30:51.396407 master-0 kubenswrapper[19803]: I0313 01:30:51.396271 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:30:51.403367 master-0 kubenswrapper[19803]: I0313 01:30:51.403290 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:30:54.032806 master-0 kubenswrapper[19803]: I0313 01:30:54.032717 19803 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-master-0" Mar 13 01:30:54.153269 master-0 kubenswrapper[19803]: I0313 01:30:54.153186 19803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="077dd10388b9e3e48a07382126e86621" podUID="e3798efa-4e15-463a-9528-f8868764ddbd" Mar 13 01:30:55.073347 master-0 kubenswrapper[19803]: I0313 01:30:55.073255 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="da10e656-1fb2-4dad-bb7c-2d5c150724b2" Mar 13 01:30:55.073347 master-0 kubenswrapper[19803]: I0313 01:30:55.073319 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="da10e656-1fb2-4dad-bb7c-2d5c150724b2" Mar 13 01:30:55.078601 master-0 kubenswrapper[19803]: I0313 01:30:55.078124 19803 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-master-0" oldPodUID="077dd10388b9e3e48a07382126e86621" podUID="e3798efa-4e15-463a-9528-f8868764ddbd" Mar 13 01:31:01.402224 master-0 kubenswrapper[19803]: I0313 01:31:01.402126 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-kube-controller-manager/kube-controller-manager-master-0" Mar 13 01:31:04.194262 master-0 kubenswrapper[19803]: I0313 01:31:04.194191 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-m8298" Mar 13 01:31:04.750210 master-0 kubenswrapper[19803]: I0313 01:31:04.750155 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Mar 13 01:31:05.246289 master-0 kubenswrapper[19803]: I0313 01:31:05.246190 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-images" Mar 13 01:31:05.249196 master-0 kubenswrapper[19803]: I0313 01:31:05.249148 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-kube-rbac-proxy-config" Mar 13 01:31:05.496712 master-0 kubenswrapper[19803]: I0313 01:31:05.496582 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"kube-root-ca.crt" Mar 13 01:31:05.744369 master-0 kubenswrapper[19803]: I0313 01:31:05.744286 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Mar 13 01:31:05.759496 master-0 kubenswrapper[19803]: I0313 01:31:05.759386 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cluster-cloud-controller-manager-dockercfg-hzxsb" Mar 13 01:31:05.759619 master-0 kubenswrapper[19803]: I0313 01:31:05.759498 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Mar 13 01:31:05.903153 master-0 kubenswrapper[19803]: I0313 01:31:05.903072 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Mar 13 01:31:06.190141 master-0 kubenswrapper[19803]: I0313 
01:31:06.189945 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Mar 13 01:31:06.330695 master-0 kubenswrapper[19803]: I0313 01:31:06.329017 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"trusted-ca-bundle" Mar 13 01:31:06.785602 master-0 kubenswrapper[19803]: I0313 01:31:06.785473 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Mar 13 01:31:07.150966 master-0 kubenswrapper[19803]: I0313 01:31:07.150792 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"kube-root-ca.crt" Mar 13 01:31:07.233305 master-0 kubenswrapper[19803]: I0313 01:31:07.233178 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Mar 13 01:31:07.334401 master-0 kubenswrapper[19803]: I0313 01:31:07.334335 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Mar 13 01:31:07.426408 master-0 kubenswrapper[19803]: I0313 01:31:07.426243 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"baremetal-kube-rbac-proxy" Mar 13 01:31:07.465570 master-0 kubenswrapper[19803]: I0313 01:31:07.465487 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-wrgnw" Mar 13 01:31:07.528784 master-0 kubenswrapper[19803]: I0313 01:31:07.528694 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Mar 13 01:31:07.664358 master-0 kubenswrapper[19803]: I0313 01:31:07.664271 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-tls" Mar 13 01:31:07.850816 master-0 kubenswrapper[19803]: I0313 01:31:07.850717 19803 reflector.go:368] 
Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Mar 13 01:31:07.854336 master-0 kubenswrapper[19803]: I0313 01:31:07.854251 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"openshift-service-ca.crt" Mar 13 01:31:07.884264 master-0 kubenswrapper[19803]: I0313 01:31:07.884194 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Mar 13 01:31:07.949724 master-0 kubenswrapper[19803]: I0313 01:31:07.949658 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-kube-rbac-proxy-config" Mar 13 01:31:08.047681 master-0 kubenswrapper[19803]: I0313 01:31:08.047601 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Mar 13 01:31:08.157584 master-0 kubenswrapper[19803]: I0313 01:31:08.157389 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Mar 13 01:31:08.174570 master-0 kubenswrapper[19803]: I0313 01:31:08.174471 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Mar 13 01:31:08.214563 master-0 kubenswrapper[19803]: I0313 01:31:08.214438 19803 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Mar 13 01:31:08.365885 master-0 kubenswrapper[19803]: I0313 01:31:08.365762 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Mar 13 01:31:08.385126 master-0 kubenswrapper[19803]: I0313 01:31:08.385025 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Mar 13 01:31:08.400042 master-0 kubenswrapper[19803]: I0313 01:31:08.399972 19803 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Mar 13 01:31:08.438290 master-0 kubenswrapper[19803]: I0313 01:31:08.438088 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"performance-addon-operator-webhook-cert" Mar 13 01:31:08.443817 master-0 kubenswrapper[19803]: I0313 01:31:08.443746 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Mar 13 01:31:08.589885 master-0 kubenswrapper[19803]: I0313 01:31:08.589768 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-rules" Mar 13 01:31:08.678337 master-0 kubenswrapper[19803]: I0313 01:31:08.678257 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Mar 13 01:31:08.678848 master-0 kubenswrapper[19803]: I0313 01:31:08.678616 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Mar 13 01:31:08.683762 master-0 kubenswrapper[19803]: I0313 01:31:08.683698 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Mar 13 01:31:08.737990 master-0 kubenswrapper[19803]: I0313 01:31:08.737877 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"openshift-service-ca.crt" Mar 13 01:31:08.737990 master-0 kubenswrapper[19803]: I0313 01:31:08.737954 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Mar 13 01:31:08.853878 master-0 kubenswrapper[19803]: I0313 01:31:08.853773 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Mar 13 01:31:08.959148 
master-0 kubenswrapper[19803]: I0313 01:31:08.958375 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-kube-rbac-proxy-config" Mar 13 01:31:09.075706 master-0 kubenswrapper[19803]: I0313 01:31:09.075035 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Mar 13 01:31:09.103596 master-0 kubenswrapper[19803]: I0313 01:31:09.100859 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Mar 13 01:31:09.112829 master-0 kubenswrapper[19803]: I0313 01:31:09.112728 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-dockercfg-cjzjq" Mar 13 01:31:09.151172 master-0 kubenswrapper[19803]: I0313 01:31:09.151033 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-zlp9s" Mar 13 01:31:09.159084 master-0 kubenswrapper[19803]: I0313 01:31:09.159001 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Mar 13 01:31:09.242995 master-0 kubenswrapper[19803]: I0313 01:31:09.242920 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Mar 13 01:31:09.324854 master-0 kubenswrapper[19803]: I0313 01:31:09.324752 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Mar 13 01:31:09.354572 master-0 kubenswrapper[19803]: I0313 01:31:09.354269 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Mar 13 01:31:09.384683 master-0 kubenswrapper[19803]: I0313 01:31:09.384495 19803 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cloud-credential-operator"/"cloud-credential-operator-serving-cert" Mar 13 01:31:09.483343 master-0 kubenswrapper[19803]: I0313 01:31:09.483228 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-tls" Mar 13 01:31:09.658151 master-0 kubenswrapper[19803]: I0313 01:31:09.657901 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-olm-operator"/"openshift-service-ca.crt" Mar 13 01:31:10.158552 master-0 kubenswrapper[19803]: I0313 01:31:10.158416 19803 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Mar 13 01:31:10.178407 master-0 kubenswrapper[19803]: I0313 01:31:10.178320 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-kmk7p" Mar 13 01:31:10.255319 master-0 kubenswrapper[19803]: I0313 01:31:10.255231 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-metrics" Mar 13 01:31:10.256424 master-0 kubenswrapper[19803]: I0313 01:31:10.256364 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Mar 13 01:31:10.352112 master-0 kubenswrapper[19803]: I0313 01:31:10.351865 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Mar 13 01:31:10.354074 master-0 kubenswrapper[19803]: I0313 01:31:10.353994 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Mar 13 01:31:10.408871 master-0 kubenswrapper[19803]: I0313 01:31:10.408754 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Mar 13 01:31:10.469021 master-0 kubenswrapper[19803]: I0313 01:31:10.468883 19803 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-ingress"/"router-stats-default" Mar 13 01:31:10.542899 master-0 kubenswrapper[19803]: I0313 01:31:10.542748 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Mar 13 01:31:10.627651 master-0 kubenswrapper[19803]: I0313 01:31:10.627291 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemetry-config" Mar 13 01:31:10.689742 master-0 kubenswrapper[19803]: I0313 01:31:10.689650 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Mar 13 01:31:10.720778 master-0 kubenswrapper[19803]: I0313 01:31:10.720713 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"openshift-state-metrics-tls" Mar 13 01:31:10.850891 master-0 kubenswrapper[19803]: I0313 01:31:10.850434 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"kube-root-ca.crt" Mar 13 01:31:10.856047 master-0 kubenswrapper[19803]: I0313 01:31:10.855976 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-rbac-proxy" Mar 13 01:31:10.874779 master-0 kubenswrapper[19803]: I0313 01:31:10.874712 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Mar 13 01:31:10.880033 master-0 kubenswrapper[19803]: I0313 01:31:10.879882 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"kube-root-ca.crt" Mar 13 01:31:10.899652 master-0 kubenswrapper[19803]: I0313 01:31:10.899596 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Mar 13 01:31:11.063925 master-0 kubenswrapper[19803]: I0313 01:31:11.063870 19803 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-config-operator"/"openshift-service-ca.crt" Mar 13 01:31:11.183987 master-0 kubenswrapper[19803]: I0313 01:31:11.183779 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Mar 13 01:31:11.216466 master-0 kubenswrapper[19803]: I0313 01:31:11.216415 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Mar 13 01:31:11.217963 master-0 kubenswrapper[19803]: I0313 01:31:11.217925 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Mar 13 01:31:11.231355 master-0 kubenswrapper[19803]: I0313 01:31:11.231290 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Mar 13 01:31:11.259959 master-0 kubenswrapper[19803]: I0313 01:31:11.259885 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Mar 13 01:31:11.342547 master-0 kubenswrapper[19803]: I0313 01:31:11.342483 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-olm-operator"/"cluster-olm-operator-serving-cert" Mar 13 01:31:11.358587 master-0 kubenswrapper[19803]: I0313 01:31:11.358514 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-24stp" Mar 13 01:31:11.370786 master-0 kubenswrapper[19803]: I0313 01:31:11.370745 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-5vwqr" Mar 13 01:31:11.521613 master-0 kubenswrapper[19803]: I0313 01:31:11.521501 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"catalogd-trusted-ca-bundle" Mar 13 01:31:11.554966 master-0 kubenswrapper[19803]: I0313 01:31:11.554908 19803 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Mar 13 01:31:11.656950 master-0 kubenswrapper[19803]: I0313 01:31:11.656900 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Mar 13 01:31:11.684861 master-0 kubenswrapper[19803]: I0313 01:31:11.684810 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"kube-root-ca.crt" Mar 13 01:31:11.761761 master-0 kubenswrapper[19803]: I0313 01:31:11.761707 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Mar 13 01:31:11.873162 master-0 kubenswrapper[19803]: I0313 01:31:11.873035 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Mar 13 01:31:11.930245 master-0 kubenswrapper[19803]: I0313 01:31:11.930188 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-storage-operator"/"cluster-storage-operator-serving-cert" Mar 13 01:31:11.943797 master-0 kubenswrapper[19803]: I0313 01:31:11.943743 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"cluster-monitoring-operator-tls" Mar 13 01:31:11.974687 master-0 kubenswrapper[19803]: I0313 01:31:11.974624 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Mar 13 01:31:12.013598 master-0 kubenswrapper[19803]: I0313 01:31:12.013456 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Mar 13 01:31:12.019474 master-0 kubenswrapper[19803]: I0313 01:31:12.019425 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Mar 13 01:31:12.023751 master-0 
kubenswrapper[19803]: I0313 01:31:12.023713 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Mar 13 01:31:12.055324 master-0 kubenswrapper[19803]: I0313 01:31:12.055264 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config"
Mar 13 01:31:12.107277 master-0 kubenswrapper[19803]: I0313 01:31:12.106824 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images"
Mar 13 01:31:12.112405 master-0 kubenswrapper[19803]: I0313 01:31:12.112350 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics"
Mar 13 01:31:12.118333 master-0 kubenswrapper[19803]: I0313 01:31:12.116854 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Mar 13 01:31:12.183126 master-0 kubenswrapper[19803]: I0313 01:31:12.182967 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle"
Mar 13 01:31:12.205811 master-0 kubenswrapper[19803]: I0313 01:31:12.205765 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"service-ca-bundle"
Mar 13 01:31:12.235532 master-0 kubenswrapper[19803]: I0313 01:31:12.235455 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-client-certs"
Mar 13 01:31:12.247452 master-0 kubenswrapper[19803]: I0313 01:31:12.247403 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls"
Mar 13 01:31:12.310501 master-0 kubenswrapper[19803]: I0313 01:31:12.310446 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-mmsdc"
Mar 13 01:31:12.327137 master-0 kubenswrapper[19803]: I0313 01:31:12.327092 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt"
Mar 13 01:31:12.345344 master-0 kubenswrapper[19803]: I0313 01:31:12.345294 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy-cluster-autoscaler-operator"
Mar 13 01:31:12.437279 master-0 kubenswrapper[19803]: I0313 01:31:12.437136 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key"
Mar 13 01:31:12.456465 master-0 kubenswrapper[19803]: I0313 01:31:12.456400 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle"
Mar 13 01:31:12.512830 master-0 kubenswrapper[19803]: I0313 01:31:12.512765 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy"
Mar 13 01:31:12.522812 master-0 kubenswrapper[19803]: I0313 01:31:12.522745 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt"
Mar 13 01:31:12.612182 master-0 kubenswrapper[19803]: I0313 01:31:12.612074 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cloud-controller-manager-operator"/"cloud-controller-manager-operator-tls"
Mar 13 01:31:12.727102 master-0 kubenswrapper[19803]: I0313 01:31:12.726945 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 01:31:12.818052 master-0 kubenswrapper[19803]: I0313 01:31:12.817950 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"openshift-service-ca.crt"
Mar 13 01:31:12.825055 master-0 kubenswrapper[19803]: I0313 01:31:12.825009 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-znq86"
Mar 13 01:31:12.902927 master-0 kubenswrapper[19803]: I0313 01:31:12.902859 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-dockercfg-hxxzs"
Mar 13 01:31:12.916978 master-0 kubenswrapper[19803]: I0313 01:31:12.916908 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Mar 13 01:31:12.956874 master-0 kubenswrapper[19803]: I0313 01:31:12.956787 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Mar 13 01:31:13.031736 master-0 kubenswrapper[19803]: I0313 01:31:13.031632 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"cluster-baremetal-operator-images"
Mar 13 01:31:13.040016 master-0 kubenswrapper[19803]: I0313 01:31:13.039979 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Mar 13 01:31:13.101962 master-0 kubenswrapper[19803]: I0313 01:31:13.101875 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-state-metrics-tls"
Mar 13 01:31:13.158589 master-0 kubenswrapper[19803]: I0313 01:31:13.158495 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt"
Mar 13 01:31:13.164291 master-0 kubenswrapper[19803]: I0313 01:31:13.164243 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-kube-rbac-proxy-config"
Mar 13 01:31:13.257591 master-0 kubenswrapper[19803]: I0313 01:31:13.257478 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle"
Mar 13 01:31:13.330626 master-0 kubenswrapper[19803]: I0313 01:31:13.330434 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls"
Mar 13 01:31:13.345369 master-0 kubenswrapper[19803]: I0313 01:31:13.345298 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-dockercfg-9pdlp"
Mar 13 01:31:13.372650 master-0 kubenswrapper[19803]: I0313 01:31:13.363904 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt"
Mar 13 01:31:13.409549 master-0 kubenswrapper[19803]: I0313 01:31:13.409464 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt"
Mar 13 01:31:13.449061 master-0 kubenswrapper[19803]: I0313 01:31:13.448237 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"trusted-ca"
Mar 13 01:31:13.539472 master-0 kubenswrapper[19803]: I0313 01:31:13.539360 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-autoscaler-operator-cert"
Mar 13 01:31:13.618788 master-0 kubenswrapper[19803]: I0313 01:31:13.618608 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1"
Mar 13 01:31:13.652038 master-0 kubenswrapper[19803]: I0313 01:31:13.651979 19803 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160
Mar 13 01:31:13.707902 master-0 kubenswrapper[19803]: I0313 01:31:13.707831 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Mar 13 01:31:13.718488 master-0 kubenswrapper[19803]: I0313 01:31:13.718429 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"default-dockercfg-n8jpw"
Mar 13 01:31:13.748625 master-0 kubenswrapper[19803]: I0313 01:31:13.748579 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Mar 13 01:31:13.749173 master-0 kubenswrapper[19803]: I0313 01:31:13.749123 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Mar 13 01:31:13.757758 master-0 kubenswrapper[19803]: I0313 01:31:13.757468 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt"
Mar 13 01:31:13.772693 master-0 kubenswrapper[19803]: I0313 01:31:13.772529 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls"
Mar 13 01:31:13.840284 master-0 kubenswrapper[19803]: I0313 01:31:13.840219 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Mar 13 01:31:13.920762 master-0 kubenswrapper[19803]: I0313 01:31:13.920501 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt"
Mar 13 01:31:14.057900 master-0 kubenswrapper[19803]: I0313 01:31:14.057825 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-mbxd4"
Mar 13 01:31:14.112907 master-0 kubenswrapper[19803]: I0313 01:31:14.112830 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Mar 13 01:31:14.115277 master-0 kubenswrapper[19803]: I0313 01:31:14.115216 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-node-tuning-operator"/"node-tuning-operator-tls"
Mar 13 01:31:14.227946 master-0 kubenswrapper[19803]: I0313 01:31:14.227784 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Mar 13 01:31:14.233483 master-0 kubenswrapper[19803]: I0313 01:31:14.233432 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-tls"
Mar 13 01:31:14.239819 master-0 kubenswrapper[19803]: I0313 01:31:14.239777 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-tls"
Mar 13 01:31:14.295896 master-0 kubenswrapper[19803]: I0313 01:31:14.294617 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt"
Mar 13 01:31:14.295896 master-0 kubenswrapper[19803]: I0313 01:31:14.295079 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca"
Mar 13 01:31:14.335835 master-0 kubenswrapper[19803]: I0313 01:31:14.335746 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"openshift-service-ca.crt"
Mar 13 01:31:14.377472 master-0 kubenswrapper[19803]: I0313 01:31:14.377386 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Mar 13 01:31:14.391229 master-0 kubenswrapper[19803]: I0313 01:31:14.391139 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-dockercfg-6mpzn"
Mar 13 01:31:14.402753 master-0 kubenswrapper[19803]: I0313 01:31:14.402664 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt"
Mar 13 01:31:14.468292 master-0 kubenswrapper[19803]: I0313 01:31:14.468232 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Mar 13 01:31:14.501541 master-0 kubenswrapper[19803]: I0313 01:31:14.501304 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Mar 13 01:31:14.503086 master-0 kubenswrapper[19803]: I0313 01:31:14.503002 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt"
Mar 13 01:31:14.555544 master-0 kubenswrapper[19803]: I0313 01:31:14.555458 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"openshift-service-ca.crt"
Mar 13 01:31:14.644435 master-0 kubenswrapper[19803]: I0313 01:31:14.644159 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Mar 13 01:31:14.711621 master-0 kubenswrapper[19803]: I0313 01:31:14.711499 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Mar 13 01:31:14.733926 master-0 kubenswrapper[19803]: I0313 01:31:14.733841 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config"
Mar 13 01:31:14.783729 master-0 kubenswrapper[19803]: I0313 01:31:14.783540 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client"
Mar 13 01:31:14.809721 master-0 kubenswrapper[19803]: I0313 01:31:14.809631 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt"
Mar 13 01:31:14.815249 master-0 kubenswrapper[19803]: I0313 01:31:14.815193 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls"
Mar 13 01:31:14.839101 master-0 kubenswrapper[19803]: I0313 01:31:14.839028 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert"
Mar 13 01:31:14.910644 master-0 kubenswrapper[19803]: I0313 01:31:14.910565 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt"
Mar 13 01:31:14.973924 master-0 kubenswrapper[19803]: I0313 01:31:14.973847 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-jgxk7"
Mar 13 01:31:15.019133 master-0 kubenswrapper[19803]: I0313 01:31:15.019049 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-controller"/"operator-controller-trusted-ca-bundle"
Mar 13 01:31:15.057364 master-0 kubenswrapper[19803]: I0313 01:31:15.057183 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-webhook-server-cert"
Mar 13 01:31:15.096758 master-0 kubenswrapper[19803]: I0313 01:31:15.096680 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-insights"/"openshift-service-ca.crt"
Mar 13 01:31:15.104399 master-0 kubenswrapper[19803]: I0313 01:31:15.104358 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-ffc5m"
Mar 13 01:31:15.160545 master-0 kubenswrapper[19803]: I0313 01:31:15.160414 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-root-ca.crt"
Mar 13 01:31:15.164017 master-0 kubenswrapper[19803]: I0313 01:31:15.163947 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kubelet-serving-ca-bundle"
Mar 13 01:31:15.215844 master-0 kubenswrapper[19803]: I0313 01:31:15.215764 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt"
Mar 13 01:31:15.309640 master-0 kubenswrapper[19803]: I0313 01:31:15.309451 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt"
Mar 13 01:31:15.323532 master-0 kubenswrapper[19803]: I0313 01:31:15.323432 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Mar 13 01:31:15.410948 master-0 kubenswrapper[19803]: I0313 01:31:15.410858 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt"
Mar 13 01:31:15.434738 master-0 kubenswrapper[19803]: I0313 01:31:15.434672 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt"
Mar 13 01:31:15.458653 master-0 kubenswrapper[19803]: I0313 01:31:15.458572 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Mar 13 01:31:15.539474 master-0 kubenswrapper[19803]: I0313 01:31:15.539385 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert"
Mar 13 01:31:15.624005 master-0 kubenswrapper[19803]: I0313 01:31:15.623831 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-5hmj8ip2t2ob4"
Mar 13 01:31:15.634313 master-0 kubenswrapper[19803]: I0313 01:31:15.634275 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Mar 13 01:31:15.657375 master-0 kubenswrapper[19803]: I0313 01:31:15.657309 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-server-audit-profiles"
Mar 13 01:31:15.666724 master-0 kubenswrapper[19803]: I0313 01:31:15.666696 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Mar 13 01:31:15.707324 master-0 kubenswrapper[19803]: I0313 01:31:15.707245 19803 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160
Mar 13 01:31:15.753490 master-0 kubenswrapper[19803]: I0313 01:31:15.753394 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client"
Mar 13 01:31:15.781899 master-0 kubenswrapper[19803]: I0313 01:31:15.781832 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"metrics-server-dockercfg-n5nfx"
Mar 13 01:31:15.804309 master-0 kubenswrapper[19803]: I0313 01:31:15.804245 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle"
Mar 13 01:31:15.822891 master-0 kubenswrapper[19803]: I0313 01:31:15.822794 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Mar 13 01:31:15.824246 master-0 kubenswrapper[19803]: I0313 01:31:15.824176 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
Mar 13 01:31:15.985293 master-0 kubenswrapper[19803]: I0313 01:31:15.985238 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config"
Mar 13 01:31:16.002856 master-0 kubenswrapper[19803]: I0313 01:31:16.002809 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Mar 13 01:31:16.114322 master-0 kubenswrapper[19803]: I0313 01:31:16.114259 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Mar 13 01:31:16.209533 master-0 kubenswrapper[19803]: I0313 01:31:16.209448 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 01:31:16.232894 master-0 kubenswrapper[19803]: I0313 01:31:16.232829 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt"
Mar 13 01:31:16.251744 master-0 kubenswrapper[19803]: I0313 01:31:16.251614 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert"
Mar 13 01:31:16.300156 master-0 kubenswrapper[19803]: I0313 01:31:16.300072 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt"
Mar 13 01:31:16.318483 master-0 kubenswrapper[19803]: I0313 01:31:16.318411 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config"
Mar 13 01:31:16.395006 master-0 kubenswrapper[19803]: I0313 01:31:16.394919 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Mar 13 01:31:16.414498 master-0 kubenswrapper[19803]: I0313 01:31:16.414424 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt"
Mar 13 01:31:16.468622 master-0 kubenswrapper[19803]: I0313 01:31:16.468557 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls"
Mar 13 01:31:16.540781 master-0 kubenswrapper[19803]: I0313 01:31:16.540019 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-7ghfl"
Mar 13 01:31:16.540781 master-0 kubenswrapper[19803]: I0313 01:31:16.540693 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"whereabouts-config"
Mar 13 01:31:16.589772 master-0 kubenswrapper[19803]: I0313 01:31:16.589694 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Mar 13 01:31:16.762911 master-0 kubenswrapper[19803]: I0313 01:31:16.762784 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib"
Mar 13 01:31:16.833588 master-0 kubenswrapper[19803]: I0313 01:31:16.833292 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Mar 13 01:31:16.998688 master-0 kubenswrapper[19803]: I0313 01:31:16.992566 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"monitoring-plugin-cert"
Mar 13 01:31:17.018304 master-0 kubenswrapper[19803]: I0313 01:31:17.018231 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-operator-admission-webhook-tls"
Mar 13 01:31:17.085628 master-0 kubenswrapper[19803]: I0313 01:31:17.085396 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca"
Mar 13 01:31:17.160338 master-0 kubenswrapper[19803]: I0313 01:31:17.160249 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy"
Mar 13 01:31:17.200917 master-0 kubenswrapper[19803]: I0313 01:31:17.200826 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Mar 13 01:31:17.247564 master-0 kubenswrapper[19803]: I0313 01:31:17.247416 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt"
Mar 13 01:31:17.398196 master-0 kubenswrapper[19803]: I0313 01:31:17.397976 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Mar 13 01:31:17.498471 master-0 kubenswrapper[19803]: I0313 01:31:17.498408 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-catalogd"/"catalogserver-cert"
Mar 13 01:31:17.533172 master-0 kubenswrapper[19803]: I0313 01:31:17.533098 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls"
Mar 13 01:31:17.628079 master-0 kubenswrapper[19803]: I0313 01:31:17.628016 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1"
Mar 13 01:31:17.721173 master-0 kubenswrapper[19803]: I0313 01:31:17.721007 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-g9s2p"
Mar 13 01:31:17.818558 master-0 kubenswrapper[19803]: I0313 01:31:17.816338 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy"
Mar 13 01:31:17.887083 master-0 kubenswrapper[19803]: I0313 01:31:17.886955 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt"
Mar 13 01:31:17.902933 master-0 kubenswrapper[19803]: I0313 01:31:17.902848 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Mar 13 01:31:17.911138 master-0 kubenswrapper[19803]: I0313 01:31:17.911029 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-m4df5"
Mar 13 01:31:17.926103 master-0 kubenswrapper[19803]: I0313 01:31:17.925966 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Mar 13 01:31:17.939429 master-0 kubenswrapper[19803]: I0313 01:31:17.939332 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cloud-credential-operator"/"cco-trusted-ca"
Mar 13 01:31:17.962836 master-0 kubenswrapper[19803]: I0313 01:31:17.962756 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt"
Mar 13 01:31:18.085710 master-0 kubenswrapper[19803]: I0313 01:31:18.085642 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert"
Mar 13 01:31:18.135259 master-0 kubenswrapper[19803]: I0313 01:31:18.135193 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default"
Mar 13 01:31:18.152335 master-0 kubenswrapper[19803]: I0313 01:31:18.152274 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle"
Mar 13 01:31:18.157899 master-0 kubenswrapper[19803]: I0313 01:31:18.157832 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt"
Mar 13 01:31:18.297840 master-0 kubenswrapper[19803]: I0313 01:31:18.297704 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"kube-state-metrics-custom-resource-state-configmap"
Mar 13 01:31:18.391908 master-0 kubenswrapper[19803]: I0313 01:31:18.391672 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca"
Mar 13 01:31:18.409043 master-0 kubenswrapper[19803]: I0313 01:31:18.408966 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls"
Mar 13 01:31:18.476007 master-0 kubenswrapper[19803]: I0313 01:31:18.475922 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-insights"/"openshift-insights-serving-cert"
Mar 13 01:31:18.552931 master-0 kubenswrapper[19803]: I0313 01:31:18.552867 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Mar 13 01:31:18.628895 master-0 kubenswrapper[19803]: I0313 01:31:18.628755 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt"
Mar 13 01:31:18.655386 master-0 kubenswrapper[19803]: I0313 01:31:18.655233 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert"
Mar 13 01:31:18.670594 master-0 kubenswrapper[19803]: I0313 01:31:18.670494 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"openshift-service-ca.crt"
Mar 13 01:31:18.798178 master-0 kubenswrapper[19803]: I0313 01:31:18.798103 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt"
Mar 13 01:31:18.802438 master-0 kubenswrapper[19803]: I0313 01:31:18.802392 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca"
Mar 13 01:31:18.822407 master-0 kubenswrapper[19803]: I0313 01:31:18.822342 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Mar 13 01:31:18.824309 master-0 kubenswrapper[19803]: I0313 01:31:18.824261 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt"
Mar 13 01:31:18.918010 master-0 kubenswrapper[19803]: I0313 01:31:18.917830 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"cluster-baremetal-operator-tls"
Mar 13 01:31:18.925808 master-0 kubenswrapper[19803]: I0313 01:31:18.925764 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-storage-operator"/"kube-root-ca.crt"
Mar 13 01:31:18.992895 master-0 kubenswrapper[19803]: I0313 01:31:18.992803 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-grpc-tls-fpcr0ruobri08"
Mar 13 01:31:18.993607 master-0 kubenswrapper[19803]: I0313 01:31:18.993536 19803 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Mar 13 01:31:19.002160 master-0 kubenswrapper[19803]: I0313 01:31:19.002080 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-master-0"]
Mar 13 01:31:19.002333 master-0 kubenswrapper[19803]: I0313 01:31:19.002185 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5cd7664db7-4ljbn","openshift-kube-apiserver/kube-apiserver-master-0","openshift-controller-manager/controller-manager-5bc647b4dd-x4f22","openshift-console/console-5c44cb5779-v77m6","openshift-authentication/oauth-openshift-575f7bbb59-ntckb","openshift-monitoring/prometheus-k8s-0","openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb","openshift-monitoring/alertmanager-main-0"]
Mar 13 01:31:19.002661 master-0 kubenswrapper[19803]: E0313 01:31:19.002613 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" containerName="installer"
Mar 13 01:31:19.002661 master-0 kubenswrapper[19803]: I0313 01:31:19.002654 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" containerName="installer"
Mar 13 01:31:19.002997 master-0 kubenswrapper[19803]: I0313 01:31:19.002958 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="6481abb4-a276-4bf1-b16b-271e2ce7936e" containerName="installer"
Mar 13 01:31:19.003313 master-0 kubenswrapper[19803]: I0313 01:31:19.003095 19803 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="da10e656-1fb2-4dad-bb7c-2d5c150724b2"
Mar 13 01:31:19.003313 master-0 kubenswrapper[19803]: I0313 01:31:19.003155 19803 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-master-0" podUID="da10e656-1fb2-4dad-bb7c-2d5c150724b2"
Mar 13 01:31:19.005463 master-0 kubenswrapper[19803]: I0313 01:31:19.005415 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c44cb5779-v77m6"
Mar 13 01:31:19.007501 master-0 kubenswrapper[19803]: I0313 01:31:19.007423 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert"
Mar 13 01:31:19.008475 master-0 kubenswrapper[19803]: I0313 01:31:19.008429 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config"
Mar 13 01:31:19.010211 master-0 kubenswrapper[19803]: I0313 01:31:19.009324 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config"
Mar 13 01:31:19.010211 master-0 kubenswrapper[19803]: I0313 01:31:19.009633 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-6zvs6"
Mar 13 01:31:19.011665 master-0 kubenswrapper[19803]: I0313 01:31:19.011626 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Mar 13 01:31:19.012259 master-0 kubenswrapper[19803]: I0313 01:31:19.012227 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert"
Mar 13 01:31:19.023787 master-0 kubenswrapper[19803]: I0313 01:31:19.023693 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.028986 master-0 kubenswrapper[19803]: I0313 01:31:19.028929 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.030055 master-0 kubenswrapper[19803]: I0313 01:31:19.029730 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-metric"
Mar 13 01:31:19.030143 master-0 kubenswrapper[19803]: I0313 01:31:19.030060 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.033270 master-0 kubenswrapper[19803]: I0313 01:31:19.032836 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-dockercfg-wq6hg"
Mar 13 01:31:19.033270 master-0 kubenswrapper[19803]: I0313 01:31:19.033021 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy-web"
Mar 13 01:31:19.034652 master-0 kubenswrapper[19803]: I0313 01:31:19.033669 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22"
Mar 13 01:31:19.035366 master-0 kubenswrapper[19803]: I0313 01:31:19.034873 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb"
Mar 13 01:31:19.038883 master-0 kubenswrapper[19803]: I0313 01:31:19.035694 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:31:19.038883 master-0 kubenswrapper[19803]: I0313 01:31:19.035884 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.038883 master-0 kubenswrapper[19803]: I0313 01:31:19.037907 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Mar 13 01:31:19.050898 master-0 kubenswrapper[19803]: I0313 01:31:19.049751 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-kube-rbac-proxy"
Mar 13 01:31:19.050898 master-0 kubenswrapper[19803]: I0313 01:31:19.049751 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls-assets-0"
Mar 13 01:31:19.050898 master-0 kubenswrapper[19803]: I0313 01:31:19.049854 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-prometheus-http-client-file"
Mar 13 01:31:19.050898 master-0 kubenswrapper[19803]: I0313 01:31:19.049959 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"kube-rbac-proxy"
Mar 13 01:31:19.050898 master-0 kubenswrapper[19803]: I0313 01:31:19.050059 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-web-config"
Mar 13 01:31:19.050898 master-0 kubenswrapper[19803]: I0313 01:31:19.050205 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-kube-rbac-proxy-web"
Mar 13 01:31:19.054023 master-0 kubenswrapper[19803]: I0313 01:31:19.052108 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-generated"
Mar 13 01:31:19.054023 master-0 kubenswrapper[19803]: I0313 01:31:19.052501 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-web-config"
Mar 13 01:31:19.054023 master-0 kubenswrapper[19803]: I0313 01:31:19.052777 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Mar 13 01:31:19.054757 master-0 kubenswrapper[19803]: I0313 01:31:19.054683 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Mar 13 01:31:19.055435 master-0 kubenswrapper[19803]: I0313 01:31:19.055350 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt"
Mar 13 01:31:19.056614 master-0 kubenswrapper[19803]: I0313 01:31:19.056545 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Mar 13 01:31:19.056739 master-0 kubenswrapper[19803]: I0313 01:31:19.056680 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Mar 13 01:31:19.057217 master-0 kubenswrapper[19803]: I0313 01:31:19.057099 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Mar 13 01:31:19.057334 master-0 kubenswrapper[19803]: I0313 01:31:19.057305 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Mar 13 01:31:19.057420 master-0 kubenswrapper[19803]: I0313 01:31:19.057391 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Mar 13 01:31:19.057600 master-0 kubenswrapper[19803]: I0313 01:31:19.057530 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-wsvvx"
Mar 13 01:31:19.057698 master-0 kubenswrapper[19803]: I0313 01:31:19.057608 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Mar 13 01:31:19.057698 master-0 kubenswrapper[19803]: I0313 01:31:19.057657 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"alertmanager-main-tls"
Mar 13 01:31:19.057862 master-0 kubenswrapper[19803]: I0313 01:31:19.057768 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Mar 13 01:31:19.057862 master-0 kubenswrapper[19803]: I0313 01:31:19.057833 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"serving-certs-ca-bundle"
Mar 13 01:31:19.058009 master-0 kubenswrapper[19803]: I0313 01:31:19.057926 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Mar 13 01:31:19.058009 master-0 kubenswrapper[19803]: I0313 01:31:19.057938 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls-assets-0"
Mar 13 01:31:19.058311 master-0 kubenswrapper[19803]: I0313 01:31:19.058210 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Mar 13 01:31:19.058393 master-0 kubenswrapper[19803]: I0313 01:31:19.058350 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-dockercfg-f4w47"
Mar 13 01:31:19.058694 master-0 kubenswrapper[19803]: I0313 01:31:19.058554 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-thanos-sidecar-tls"
Mar 13 01:31:19.058790 master-0 kubenswrapper[19803]: I0313 01:31:19.058748 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert"
Mar 13 01:31:19.058872 master-0 kubenswrapper[19803]: I0313 01:31:19.058854 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-tls"
Mar 13 01:31:19.058953 master-0 kubenswrapper[19803]: I0313 01:31:19.058941 19803
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-wwxvr" Mar 13 01:31:19.059127 master-0 kubenswrapper[19803]: I0313 01:31:19.059057 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Mar 13 01:31:19.059221 master-0 kubenswrapper[19803]: I0313 01:31:19.059151 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Mar 13 01:31:19.059304 master-0 kubenswrapper[19803]: I0313 01:31:19.059259 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Mar 13 01:31:19.059412 master-0 kubenswrapper[19803]: I0313 01:31:19.059344 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Mar 13 01:31:19.059412 master-0 kubenswrapper[19803]: I0313 01:31:19.057505 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Mar 13 01:31:19.060959 master-0 kubenswrapper[19803]: I0313 01:31:19.060797 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Mar 13 01:31:19.062050 master-0 kubenswrapper[19803]: I0313 01:31:19.061950 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Mar 13 01:31:19.062315 master-0 kubenswrapper[19803]: I0313 01:31:19.062218 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s" Mar 13 01:31:19.062315 master-0 kubenswrapper[19803]: I0313 01:31:19.062293 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"prometheus-k8s-grpc-tls-ftucsgvpi0546" Mar 13 01:31:19.067788 master-0 kubenswrapper[19803]: I0313 01:31:19.064273 19803 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Mar 13 01:31:19.067788 master-0 kubenswrapper[19803]: I0313 01:31:19.066066 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-k8s-rulefiles-0" Mar 13 01:31:19.067788 master-0 kubenswrapper[19803]: I0313 01:31:19.067764 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"alertmanager-trusted-ca-bundle" Mar 13 01:31:19.085501 master-0 kubenswrapper[19803]: I0313 01:31:19.085401 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Mar 13 01:31:19.095826 master-0 kubenswrapper[19803]: I0313 01:31:19.095681 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"prometheus-trusted-ca-bundle" Mar 13 01:31:19.107537 master-0 kubenswrapper[19803]: I0313 01:31:19.106816 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Mar 13 01:31:19.124150 master-0 kubenswrapper[19803]: I0313 01:31:19.124090 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Mar 13 01:31:19.126443 master-0 kubenswrapper[19803]: I0313 01:31:19.126414 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Mar 13 01:31:19.129822 master-0 kubenswrapper[19803]: I0313 01:31:19.128972 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Mar 13 01:31:19.148750 master-0 kubenswrapper[19803]: I0313 01:31:19.148657 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-master-0" podStartSLOduration=25.148632776 
podStartE2EDuration="25.148632776s" podCreationTimestamp="2026-03-13 01:30:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:31:19.121970089 +0000 UTC m=+827.087117768" watchObservedRunningTime="2026-03-13 01:31:19.148632776 +0000 UTC m=+827.113780465" Mar 13 01:31:19.202544 master-0 kubenswrapper[19803]: I0313 01:31:19.202482 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.202811 master-0 kubenswrapper[19803]: I0313 01:31:19.202796 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-web-config\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.202909 master-0 kubenswrapper[19803]: I0313 01:31:19.202896 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-trusted-ca-bundle\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:19.202998 master-0 kubenswrapper[19803]: I0313 01:31:19.202986 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-serving-cert\") pod \"console-5c44cb5779-v77m6\" (UID: 
\"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.203096 master-0 kubenswrapper[19803]: I0313 01:31:19.203082 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-router-certs\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.203177 master-0 kubenswrapper[19803]: I0313 01:31:19.203165 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f797fd7-03a8-4b62-82c2-2015dd076114-config-out\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.203300 master-0 kubenswrapper[19803]: I0313 01:31:19.203286 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.203403 master-0 kubenswrapper[19803]: I0313 01:31:19.203389 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.203491 master-0 kubenswrapper[19803]: I0313 01:31:19.203478 19803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nvzt4\" (UniqueName: \"kubernetes.io/projected/2dc7825a-7953-4bbc-908e-5a65741568e7-kube-api-access-nvzt4\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:19.203612 master-0 kubenswrapper[19803]: I0313 01:31:19.203599 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-oauth-serving-cert\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.203718 master-0 kubenswrapper[19803]: I0313 01:31:19.203705 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9f797fd7-03a8-4b62-82c2-2015dd076114-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.203811 master-0 kubenswrapper[19803]: I0313 01:31:19.203800 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f797fd7-03a8-4b62-82c2-2015dd076114-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.203901 master-0 kubenswrapper[19803]: I0313 01:31:19.203887 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-prometheus-k8s-tls\") pod 
\"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.203991 master-0 kubenswrapper[19803]: I0313 01:31:19.203978 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-oauth-config\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.204079 master-0 kubenswrapper[19803]: I0313 01:31:19.204067 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-user-template-login\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.204173 master-0 kubenswrapper[19803]: I0313 01:31:19.204160 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e9feb3b-8bbd-4778-b23c-153143635880-client-ca\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb" Mar 13 01:31:19.204270 master-0 kubenswrapper[19803]: I0313 01:31:19.204257 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-console-config\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.204366 master-0 kubenswrapper[19803]: I0313 01:31:19.204351 19803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.204474 master-0 kubenswrapper[19803]: I0313 01:31:19.204461 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.204669 master-0 kubenswrapper[19803]: I0313 01:31:19.204654 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.204885 master-0 kubenswrapper[19803]: I0313 01:31:19.204869 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.204983 master-0 kubenswrapper[19803]: I0313 01:31:19.204970 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.205069 master-0 kubenswrapper[19803]: I0313 01:31:19.205058 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-web-config\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.205159 master-0 kubenswrapper[19803]: I0313 01:31:19.205147 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8gk8\" (UniqueName: \"kubernetes.io/projected/9f797fd7-03a8-4b62-82c2-2015dd076114-kube-api-access-m8gk8\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.205257 master-0 kubenswrapper[19803]: I0313 01:31:19.205241 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9f797fd7-03a8-4b62-82c2-2015dd076114-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.205340 master-0 kubenswrapper[19803]: I0313 01:31:19.205328 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5c6c\" (UniqueName: \"kubernetes.io/projected/1e9feb3b-8bbd-4778-b23c-153143635880-kube-api-access-b5c6c\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " 
pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb" Mar 13 01:31:19.205431 master-0 kubenswrapper[19803]: I0313 01:31:19.205419 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-serving-cert\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:19.205538 master-0 kubenswrapper[19803]: I0313 01:31:19.205509 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8924\" (UniqueName: \"kubernetes.io/projected/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-kube-api-access-k8924\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:19.205631 master-0 kubenswrapper[19803]: I0313 01:31:19.205618 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.205711 master-0 kubenswrapper[19803]: I0313 01:31:19.205700 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-service-ca\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.205802 master-0 kubenswrapper[19803]: I0313 01:31:19.205788 19803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-oauth-config\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:19.205893 master-0 kubenswrapper[19803]: I0313 01:31:19.205881 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-service-ca\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.205985 master-0 kubenswrapper[19803]: I0313 01:31:19.205972 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.206070 master-0 kubenswrapper[19803]: I0313 01:31:19.206058 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-user-template-error\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.206158 master-0 kubenswrapper[19803]: I0313 01:31:19.206144 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-grpc-tls\") pod \"prometheus-k8s-0\" 
(UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.206260 master-0 kubenswrapper[19803]: I0313 01:31:19.206246 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.206370 master-0 kubenswrapper[19803]: I0313 01:31:19.206356 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.206474 master-0 kubenswrapper[19803]: I0313 01:31:19.206456 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc84d\" (UniqueName: \"kubernetes.io/projected/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-kube-api-access-mc84d\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.206564 master-0 kubenswrapper[19803]: I0313 01:31:19.206552 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.206706 master-0 kubenswrapper[19803]: I0313 01:31:19.206691 19803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2dc7825a-7953-4bbc-908e-5a65741568e7-client-ca\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:19.206797 master-0 kubenswrapper[19803]: I0313 01:31:19.206783 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/83d4214e-5ca9-401d-bd0c-860f02034a10-config-out\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.206877 master-0 kubenswrapper[19803]: I0313 01:31:19.206865 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.206964 master-0 kubenswrapper[19803]: I0313 01:31:19.206952 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-audit-policies\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.207081 master-0 kubenswrapper[19803]: I0313 01:31:19.207056 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.207170 master-0 kubenswrapper[19803]: I0313 01:31:19.207158 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-config\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.207309 master-0 kubenswrapper[19803]: I0313 01:31:19.207292 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/83d4214e-5ca9-401d-bd0c-860f02034a10-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.207566 master-0 kubenswrapper[19803]: I0313 01:31:19.207497 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-audit-dir\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.207710 master-0 kubenswrapper[19803]: I0313 01:31:19.207690 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2dc7825a-7953-4bbc-908e-5a65741568e7-proxy-ca-bundles\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:19.207803 master-0 kubenswrapper[19803]: I0313 01:31:19.207790 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.207910 master-0 kubenswrapper[19803]: I0313 01:31:19.207893 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-trusted-ca-bundle\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6"
Mar 13 01:31:19.208034 master-0 kubenswrapper[19803]: I0313 01:31:19.208019 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.208131 master-0 kubenswrapper[19803]: I0313 01:31:19.208118 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc7825a-7953-4bbc-908e-5a65741568e7-serving-cert\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22"
Mar 13 01:31:19.208238 master-0 kubenswrapper[19803]: I0313 01:31:19.208220 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e9feb3b-8bbd-4778-b23c-153143635880-serving-cert\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb"
Mar 13 01:31:19.208489 master-0 kubenswrapper[19803]: I0313 01:31:19.208407 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.208636 master-0 kubenswrapper[19803]: I0313 01:31:19.208620 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc7825a-7953-4bbc-908e-5a65741568e7-config\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22"
Mar 13 01:31:19.208742 master-0 kubenswrapper[19803]: I0313 01:31:19.208729 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jjvzc\" (UniqueName: \"kubernetes.io/projected/f65326a0-0c48-4424-8269-135d5e800127-kube-api-access-jjvzc\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6"
Mar 13 01:31:19.208832 master-0 kubenswrapper[19803]: I0313 01:31:19.208819 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-config-volume\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.208924 master-0 kubenswrapper[19803]: I0313 01:31:19.208912 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvlpf\" (UniqueName: \"kubernetes.io/projected/83d4214e-5ca9-401d-bd0c-860f02034a10-kube-api-access-zvlpf\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.209019 master-0 kubenswrapper[19803]: I0313 01:31:19.209007 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f797fd7-03a8-4b62-82c2-2015dd076114-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.209122 master-0 kubenswrapper[19803]: I0313 01:31:19.209108 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.209215 master-0 kubenswrapper[19803]: I0313 01:31:19.209202 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-config\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.209304 master-0 kubenswrapper[19803]: I0313 01:31:19.209293 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9feb3b-8bbd-4778-b23c-153143635880-config\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb"
Mar 13 01:31:19.209395 master-0 kubenswrapper[19803]: I0313 01:31:19.209381 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-service-ca\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.209824 master-0 kubenswrapper[19803]: I0313 01:31:19.209754 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-session\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.209956 master-0 kubenswrapper[19803]: I0313 01:31:19.209937 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/83d4214e-5ca9-401d-bd0c-860f02034a10-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.210032 master-0 kubenswrapper[19803]: I0313 01:31:19.210018 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-oauth-serving-cert\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.271023 master-0 kubenswrapper[19803]: I0313 01:31:19.270966 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt"
Mar 13 01:31:19.279186 master-0 kubenswrapper[19803]: I0313 01:31:19.279158 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist"
Mar 13 01:31:19.306097 master-0 kubenswrapper[19803]: I0313 01:31:19.306046 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-master-0"
Mar 13 01:31:19.311339 master-0 kubenswrapper[19803]: I0313 01:31:19.311301 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-config-volume\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.311483 master-0 kubenswrapper[19803]: I0313 01:31:19.311351 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvlpf\" (UniqueName: \"kubernetes.io/projected/83d4214e-5ca9-401d-bd0c-860f02034a10-kube-api-access-zvlpf\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.311483 master-0 kubenswrapper[19803]: I0313 01:31:19.311381 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f797fd7-03a8-4b62-82c2-2015dd076114-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.311483 master-0 kubenswrapper[19803]: I0313 01:31:19.311401 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.311483 master-0 kubenswrapper[19803]: I0313 01:31:19.311423 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-config\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.311483 master-0 kubenswrapper[19803]: I0313 01:31:19.311444 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9feb3b-8bbd-4778-b23c-153143635880-config\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb"
Mar 13 01:31:19.311483 master-0 kubenswrapper[19803]: I0313 01:31:19.311459 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-service-ca\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.311483 master-0 kubenswrapper[19803]: I0313 01:31:19.311481 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-session\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.311837 master-0 kubenswrapper[19803]: I0313 01:31:19.311497 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/83d4214e-5ca9-401d-bd0c-860f02034a10-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.311837 master-0 kubenswrapper[19803]: I0313 01:31:19.311529 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-oauth-serving-cert\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.311837 master-0 kubenswrapper[19803]: I0313 01:31:19.311547 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.311837 master-0 kubenswrapper[19803]: I0313 01:31:19.311568 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-web-config\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.311837 master-0 kubenswrapper[19803]: I0313 01:31:19.311587 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-serving-cert\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6"
Mar 13 01:31:19.311837 master-0 kubenswrapper[19803]: I0313 01:31:19.311604 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-trusted-ca-bundle\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.311837 master-0 kubenswrapper[19803]: I0313 01:31:19.311620 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-router-certs\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.311837 master-0 kubenswrapper[19803]: I0313 01:31:19.311637 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f797fd7-03a8-4b62-82c2-2015dd076114-config-out\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.311837 master-0 kubenswrapper[19803]: I0313 01:31:19.311671 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.312255 master-0 kubenswrapper[19803]: I0313 01:31:19.312233 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nvzt4\" (UniqueName: \"kubernetes.io/projected/2dc7825a-7953-4bbc-908e-5a65741568e7-kube-api-access-nvzt4\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22"
Mar 13 01:31:19.312675 master-0 kubenswrapper[19803]: I0313 01:31:19.312478 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-oauth-serving-cert\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6"
Mar 13 01:31:19.312675 master-0 kubenswrapper[19803]: I0313 01:31:19.312529 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9f797fd7-03a8-4b62-82c2-2015dd076114-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.312675 master-0 kubenswrapper[19803]: I0313 01:31:19.312550 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f797fd7-03a8-4b62-82c2-2015dd076114-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.312675 master-0 kubenswrapper[19803]: I0313 01:31:19.312572 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.312675 master-0 kubenswrapper[19803]: I0313 01:31:19.312589 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-oauth-config\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6"
Mar 13 01:31:19.312675 master-0 kubenswrapper[19803]: I0313 01:31:19.312609 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-user-template-login\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.313017 master-0 kubenswrapper[19803]: I0313 01:31:19.312933 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-db\" (UniqueName: \"kubernetes.io/empty-dir/83d4214e-5ca9-401d-bd0c-860f02034a10-prometheus-k8s-db\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.313101 master-0 kubenswrapper[19803]: I0313 01:31:19.312954 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.313153 master-0 kubenswrapper[19803]: I0313 01:31:19.313124 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e9feb3b-8bbd-4778-b23c-153143635880-client-ca\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb"
Mar 13 01:31:19.313225 master-0 kubenswrapper[19803]: I0313 01:31:19.313193 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-console-config\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6"
Mar 13 01:31:19.313279 master-0 kubenswrapper[19803]: I0313 01:31:19.313237 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.313332 master-0 kubenswrapper[19803]: I0313 01:31:19.313277 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.313381 master-0 kubenswrapper[19803]: I0313 01:31:19.313329 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.313430 master-0 kubenswrapper[19803]: I0313 01:31:19.313385 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.313473 master-0 kubenswrapper[19803]: I0313 01:31:19.313437 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8gk8\" (UniqueName: \"kubernetes.io/projected/9f797fd7-03a8-4b62-82c2-2015dd076114-kube-api-access-m8gk8\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.314059 master-0 kubenswrapper[19803]: I0313 01:31:19.313485 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.314148 master-0 kubenswrapper[19803]: I0313 01:31:19.314067 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/9f797fd7-03a8-4b62-82c2-2015dd076114-metrics-client-ca\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.316136 master-0 kubenswrapper[19803]: I0313 01:31:19.316079 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-configmap-metrics-client-ca\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.316821 master-0 kubenswrapper[19803]: I0313 01:31:19.316787 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-kubelet-serving-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-configmap-kubelet-serving-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.316975 master-0 kubenswrapper[19803]: I0313 01:31:19.316860 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.317109 master-0 kubenswrapper[19803]: I0313 01:31:19.317051 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/1e9feb3b-8bbd-4778-b23c-153143635880-client-ca\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb"
Mar 13 01:31:19.318024 master-0 kubenswrapper[19803]: I0313 01:31:19.317961 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-kube-rbac-proxy\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.318229 master-0 kubenswrapper[19803]: I0313 01:31:19.318188 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-console-config\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6"
Mar 13 01:31:19.318433 master-0 kubenswrapper[19803]: I0313 01:31:19.318395 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-trusted-ca-bundle\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.318653 master-0 kubenswrapper[19803]: I0313 01:31:19.318611 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-oauth-serving-cert\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.320591 master-0 kubenswrapper[19803]: I0313 01:31:19.318721 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-service-ca\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.320688 master-0 kubenswrapper[19803]: I0313 01:31:19.318970 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/9f797fd7-03a8-4b62-82c2-2015dd076114-config-out\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.320688 master-0 kubenswrapper[19803]: I0313 01:31:19.319153 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/9f797fd7-03a8-4b62-82c2-2015dd076114-tls-assets\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.320688 master-0 kubenswrapper[19803]: I0313 01:31:19.319728 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1e9feb3b-8bbd-4778-b23c-153143635880-config\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb"
Mar 13 01:31:19.320688 master-0 kubenswrapper[19803]: I0313 01:31:19.319834 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-web-config\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.320875 master-0 kubenswrapper[19803]: I0313 01:31:19.320717 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9f797fd7-03a8-4b62-82c2-2015dd076114-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.320875 master-0 kubenswrapper[19803]: I0313 01:31:19.320771 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5c6c\" (UniqueName: \"kubernetes.io/projected/1e9feb3b-8bbd-4778-b23c-153143635880-kube-api-access-b5c6c\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb"
Mar 13 01:31:19.320875 master-0 kubenswrapper[19803]: I0313 01:31:19.320802 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-serving-cert\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.320875 master-0 kubenswrapper[19803]: I0313 01:31:19.320838 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8924\" (UniqueName: \"kubernetes.io/projected/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-kube-api-access-k8924\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.321045 master-0 kubenswrapper[19803]: I0313 01:31:19.320876 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.321045 master-0 kubenswrapper[19803]: I0313 01:31:19.320906 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-service-ca\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.321045 master-0 kubenswrapper[19803]: I0313 01:31:19.320942 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-oauth-config\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn"
Mar 13 01:31:19.321045 master-0 kubenswrapper[19803]: I0313 01:31:19.320967 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-service-ca\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6"
Mar 13 01:31:19.321045 master-0 kubenswrapper[19803]: I0313 01:31:19.321000 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.321045 master-0 kubenswrapper[19803]: I0313 01:31:19.321036 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-user-template-error\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.321367 master-0 kubenswrapper[19803]: I0313 01:31:19.321081 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.321367 master-0 kubenswrapper[19803]: I0313 01:31:19.321121 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.321367 master-0 kubenswrapper[19803]: I0313 01:31:19.321132 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f797fd7-03a8-4b62-82c2-2015dd076114-alertmanager-trusted-ca-bundle\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.321367 master-0 kubenswrapper[19803]: I0313 01:31:19.321148 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.321367 master-0 kubenswrapper[19803]: I0313 01:31:19.321242 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc84d\" (UniqueName: \"kubernetes.io/projected/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-kube-api-access-mc84d\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.321367 master-0 kubenswrapper[19803]: I0313 01:31:19.321272 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-kube-rbac-proxy\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.321367 master-0 kubenswrapper[19803]: I0313 01:31:19.321298 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2dc7825a-7953-4bbc-908e-5a65741568e7-client-ca\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22"
Mar 13 01:31:19.321367 master-0 kubenswrapper[19803]: I0313 01:31:19.321329 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/83d4214e-5ca9-401d-bd0c-860f02034a10-config-out\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.321367 master-0 kubenswrapper[19803]: I0313 01:31:19.321354 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321387 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-audit-policies\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321390 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"thanos-prometheus-http-client-file\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-thanos-prometheus-http-client-file\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321416 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321475 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-config\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321535 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/83d4214e-5ca9-401d-bd0c-860f02034a10-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321566 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-audit-dir\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321589 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2dc7825a-7953-4bbc-908e-5a65741568e7-proxy-ca-bundles\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321618 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321645 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-trusted-ca-bundle\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321682 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321706 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc7825a-7953-4bbc-908e-5a65741568e7-serving-cert\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]: I0313 01:31:19.321731 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e9feb3b-8bbd-4778-b23c-153143635880-serving-cert\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb"
Mar 13 01:31:19.321761 master-0 kubenswrapper[19803]:
I0313 01:31:19.321759 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.322283 master-0 kubenswrapper[19803]: I0313 01:31:19.321787 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc7825a-7953-4bbc-908e-5a65741568e7-config\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:19.322283 master-0 kubenswrapper[19803]: I0313 01:31:19.321828 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jjvzc\" (UniqueName: \"kubernetes.io/projected/f65326a0-0c48-4424-8269-135d5e800127-kube-api-access-jjvzc\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.322283 master-0 kubenswrapper[19803]: I0313 01:31:19.321956 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-configmap-serving-certs-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.323633 master-0 kubenswrapper[19803]: I0313 01:31:19.320165 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-router-certs\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: 
\"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.323633 master-0 kubenswrapper[19803]: I0313 01:31:19.320346 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-oauth-serving-cert\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.323633 master-0 kubenswrapper[19803]: I0313 01:31:19.320436 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-config\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:19.324375 master-0 kubenswrapper[19803]: I0313 01:31:19.324236 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2dc7825a-7953-4bbc-908e-5a65741568e7-client-ca\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:19.324820 master-0 kubenswrapper[19803]: I0313 01:31:19.324622 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"alertmanager-main-db\" (UniqueName: \"kubernetes.io/empty-dir/9f797fd7-03a8-4b62-82c2-2015dd076114-alertmanager-main-db\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.324820 master-0 kubenswrapper[19803]: I0313 01:31:19.324722 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-prometheus-trusted-ca-bundle\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.325338 master-0 kubenswrapper[19803]: I0313 01:31:19.324953 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-tls\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-prometheus-k8s-tls\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.325894 master-0 kubenswrapper[19803]: I0313 01:31:19.325456 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-web-config\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.326185 master-0 kubenswrapper[19803]: I0313 01:31:19.326163 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-config-volume\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.326453 master-0 kubenswrapper[19803]: I0313 01:31:19.326384 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/2dc7825a-7953-4bbc-908e-5a65741568e7-proxy-ca-bundles\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:19.326453 master-0 kubenswrapper[19803]: I0313 01:31:19.326436 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.326959 master-0 kubenswrapper[19803]: I0313 01:31:19.326929 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.327105 master-0 kubenswrapper[19803]: I0313 01:31:19.327080 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-trusted-ca-bundle\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.327308 master-0 kubenswrapper[19803]: I0313 01:31:19.327281 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-session\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.327389 master-0 kubenswrapper[19803]: I0313 01:31:19.327341 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-service-ca\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" 
Mar 13 01:31:19.327545 master-0 kubenswrapper[19803]: I0313 01:31:19.327481 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.328201 master-0 kubenswrapper[19803]: I0313 01:31:19.328162 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-prometheus-k8s-kube-rbac-proxy-web\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.328540 master-0 kubenswrapper[19803]: I0313 01:31:19.328489 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-audit-policies\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.328637 master-0 kubenswrapper[19803]: I0313 01:31:19.328595 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-audit-dir\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.328883 master-0 kubenswrapper[19803]: I0313 01:31:19.328853 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/83d4214e-5ca9-401d-bd0c-860f02034a10-config-out\") pod 
\"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.331600 master-0 kubenswrapper[19803]: I0313 01:31:19.331244 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2dc7825a-7953-4bbc-908e-5a65741568e7-config\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:19.333059 master-0 kubenswrapper[19803]: I0313 01:31:19.332617 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-service-ca\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.333739 master-0 kubenswrapper[19803]: I0313 01:31:19.333708 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-prometheus-k8s-thanos-sidecar-tls\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-prometheus-k8s-thanos-sidecar-tls\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.334209 master-0 kubenswrapper[19803]: I0313 01:31:19.334165 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-grpc-tls\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-grpc-tls\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.334881 master-0 kubenswrapper[19803]: I0313 01:31:19.334853 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-oauth-config\") 
pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.338465 master-0 kubenswrapper[19803]: I0313 01:31:19.338396 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-web-config\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.340792 master-0 kubenswrapper[19803]: I0313 01:31:19.340411 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-user-template-login\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.340792 master-0 kubenswrapper[19803]: I0313 01:31:19.340726 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvlpf\" (UniqueName: \"kubernetes.io/projected/83d4214e-5ca9-401d-bd0c-860f02034a10-kube-api-access-zvlpf\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.340792 master-0 kubenswrapper[19803]: I0313 01:31:19.340739 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/83d4214e-5ca9-401d-bd0c-860f02034a10-tls-assets\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.341415 master-0 kubenswrapper[19803]: I0313 01:31:19.341376 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-serving-cert\") pod 
\"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.342715 master-0 kubenswrapper[19803]: I0313 01:31:19.342676 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-oauth-config\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:19.346167 master-0 kubenswrapper[19803]: I0313 01:31:19.346121 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-k8s-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/83d4214e-5ca9-401d-bd0c-860f02034a10-prometheus-k8s-rulefiles-0\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.347443 master-0 kubenswrapper[19803]: I0313 01:31:19.347389 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-metrics-client-certs\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-metrics-client-certs\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.348310 master-0 kubenswrapper[19803]: I0313 01:31:19.348220 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc84d\" (UniqueName: \"kubernetes.io/projected/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-kube-api-access-mc84d\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.349303 master-0 kubenswrapper[19803]: I0313 01:31:19.349243 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5c6c\" (UniqueName: 
\"kubernetes.io/projected/1e9feb3b-8bbd-4778-b23c-153143635880-kube-api-access-b5c6c\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb" Mar 13 01:31:19.350485 master-0 kubenswrapper[19803]: I0313 01:31:19.350384 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-node-tuning-operator"/"kube-root-ca.crt" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.352087 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.352665 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-config\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.352691 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nvzt4\" (UniqueName: \"kubernetes.io/projected/2dc7825a-7953-4bbc-908e-5a65741568e7-kube-api-access-nvzt4\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.352858 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-kube-rbac-proxy\" (UniqueName: 
\"kubernetes.io/secret/83d4214e-5ca9-401d-bd0c-860f02034a10-secret-kube-rbac-proxy\") pod \"prometheus-k8s-0\" (UID: \"83d4214e-5ca9-401d-bd0c-860f02034a10\") " pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.352855 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2dc7825a-7953-4bbc-908e-5a65741568e7-serving-cert\") pod \"controller-manager-5bc647b4dd-x4f22\" (UID: \"2dc7825a-7953-4bbc-908e-5a65741568e7\") " pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.352919 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-serving-cert\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.353023 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1e9feb3b-8bbd-4778-b23c-153143635880-serving-cert\") pod \"route-controller-manager-fc468456-6rjcb\" (UID: \"1e9feb3b-8bbd-4778-b23c-153143635880\") " pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.353223 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf-v4-0-config-user-template-error\") pod \"oauth-openshift-575f7bbb59-ntckb\" (UID: \"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf\") " pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.353283 19803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-metric\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-kube-rbac-proxy-metric\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.353428 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-main-tls\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-main-tls\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.353980 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-alertmanager-kube-rbac-proxy-web\" (UniqueName: \"kubernetes.io/secret/9f797fd7-03a8-4b62-82c2-2015dd076114-secret-alertmanager-kube-rbac-proxy-web\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.355003 master-0 kubenswrapper[19803]: I0313 01:31:19.354971 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8gk8\" (UniqueName: \"kubernetes.io/projected/9f797fd7-03a8-4b62-82c2-2015dd076114-kube-api-access-m8gk8\") pod \"alertmanager-main-0\" (UID: \"9f797fd7-03a8-4b62-82c2-2015dd076114\") " pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.356396 master-0 kubenswrapper[19803]: I0313 01:31:19.356230 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jjvzc\" (UniqueName: \"kubernetes.io/projected/f65326a0-0c48-4424-8269-135d5e800127-kube-api-access-jjvzc\") pod \"console-5c44cb5779-v77m6\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " 
pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.360656 master-0 kubenswrapper[19803]: I0313 01:31:19.360003 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8924\" (UniqueName: \"kubernetes.io/projected/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-kube-api-access-k8924\") pod \"console-5cd7664db7-4ljbn\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:19.360656 master-0 kubenswrapper[19803]: I0313 01:31:19.360462 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:19.372300 master-0 kubenswrapper[19803]: I0313 01:31:19.372261 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/alertmanager-main-0" Mar 13 01:31:19.403861 master-0 kubenswrapper[19803]: I0313 01:31:19.403811 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:19.430410 master-0 kubenswrapper[19803]: I0313 01:31:19.430285 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:19.454954 master-0 kubenswrapper[19803]: I0313 01:31:19.453211 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:19.471316 master-0 kubenswrapper[19803]: I0313 01:31:19.471003 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-9krk7" Mar 13 01:31:19.475413 master-0 kubenswrapper[19803]: I0313 01:31:19.473529 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb" Mar 13 01:31:19.494287 master-0 kubenswrapper[19803]: I0313 01:31:19.492791 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:19.779323 master-0 kubenswrapper[19803]: I0313 01:31:19.779251 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5c44cb5779-v77m6"] Mar 13 01:31:19.794150 master-0 kubenswrapper[19803]: W0313 01:31:19.794023 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf65326a0_0c48_4424_8269_135d5e800127.slice/crio-1e3be30075a37b362b864634dab8d14957f3c670030a0d37307e9c1d4df24c0b WatchSource:0}: Error finding container 1e3be30075a37b362b864634dab8d14957f3c670030a0d37307e9c1d4df24c0b: Status 404 returned error can't find the container with id 1e3be30075a37b362b864634dab8d14957f3c670030a0d37307e9c1d4df24c0b Mar 13 01:31:19.888073 master-0 kubenswrapper[19803]: I0313 01:31:19.887751 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Mar 13 01:31:19.920791 master-0 kubenswrapper[19803]: I0313 01:31:19.920553 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Mar 13 01:31:19.935808 master-0 kubenswrapper[19803]: I0313 01:31:19.935650 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/prometheus-k8s-0"] Mar 13 01:31:19.938442 master-0 kubenswrapper[19803]: W0313 01:31:19.938359 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f797fd7_03a8_4b62_82c2_2015dd076114.slice/crio-6f2c7d0022c69fab0a5e296a0038c7b6ef7f547780f426897bdc2f1958819b88 WatchSource:0}: Error finding 
container 6f2c7d0022c69fab0a5e296a0038c7b6ef7f547780f426897bdc2f1958819b88: Status 404 returned error can't find the container with id 6f2c7d0022c69fab0a5e296a0038c7b6ef7f547780f426897bdc2f1958819b88 Mar 13 01:31:19.943299 master-0 kubenswrapper[19803]: W0313 01:31:19.943255 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83d4214e_5ca9_401d_bd0c_860f02034a10.slice/crio-b81810b3f9db19613bcfa9356d505b719faf405b080c4c5d2213ecbb4d2950f2 WatchSource:0}: Error finding container b81810b3f9db19613bcfa9356d505b719faf405b080c4c5d2213ecbb4d2950f2: Status 404 returned error can't find the container with id b81810b3f9db19613bcfa9356d505b719faf405b080c4c5d2213ecbb4d2950f2 Mar 13 01:31:19.945909 master-0 kubenswrapper[19803]: I0313 01:31:19.945884 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/alertmanager-main-0"] Mar 13 01:31:20.068915 master-0 kubenswrapper[19803]: I0313 01:31:20.068823 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5cd7664db7-4ljbn"] Mar 13 01:31:20.084261 master-0 kubenswrapper[19803]: I0313 01:31:20.084212 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"metrics-client-ca" Mar 13 01:31:20.112814 master-0 kubenswrapper[19803]: I0313 01:31:20.112741 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-5bc647b4dd-x4f22"] Mar 13 01:31:20.117102 master-0 kubenswrapper[19803]: W0313 01:31:20.116916 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2dc7825a_7953_4bbc_908e_5a65741568e7.slice/crio-55655d200ab3a93d1e1e1acaeec4456ebd90970a70b0e1c26c6032077bab37c5 WatchSource:0}: Error finding container 55655d200ab3a93d1e1e1acaeec4456ebd90970a70b0e1c26c6032077bab37c5: Status 404 returned error can't find the container with id 
55655d200ab3a93d1e1e1acaeec4456ebd90970a70b0e1c26c6032077bab37c5 Mar 13 01:31:20.135433 master-0 kubenswrapper[19803]: I0313 01:31:20.135382 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-575f7bbb59-ntckb"] Mar 13 01:31:20.151606 master-0 kubenswrapper[19803]: I0313 01:31:20.150368 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb"] Mar 13 01:31:20.165923 master-0 kubenswrapper[19803]: W0313 01:31:20.165870 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod364e6da6_2cb4_48aa_b2b9_e4ed87bc90bf.slice/crio-40a654278bf74746523d69d0e52c39a5f433eeb3dff82b62647a3609b9c14c95 WatchSource:0}: Error finding container 40a654278bf74746523d69d0e52c39a5f433eeb3dff82b62647a3609b9c14c95: Status 404 returned error can't find the container with id 40a654278bf74746523d69d0e52c39a5f433eeb3dff82b62647a3609b9c14c95 Mar 13 01:31:20.326820 master-0 kubenswrapper[19803]: I0313 01:31:20.326703 19803 generic.go:334] "Generic (PLEG): container finished" podID="9f797fd7-03a8-4b62-82c2-2015dd076114" containerID="705844a26e68112fd426932955dbc382b79acab40b29ffeeb33ee7721fd15ccf" exitCode=0 Mar 13 01:31:20.330361 master-0 kubenswrapper[19803]: I0313 01:31:20.330224 19803 generic.go:334] "Generic (PLEG): container finished" podID="83d4214e-5ca9-401d-bd0c-860f02034a10" containerID="67de0332710f16978bcf7373b9f113b926f75cb0c61b11bfbb4a553a666e8660" exitCode=0 Mar 13 01:31:20.333612 master-0 kubenswrapper[19803]: I0313 01:31:20.333461 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c44cb5779-v77m6" event={"ID":"f65326a0-0c48-4424-8269-135d5e800127","Type":"ContainerStarted","Data":"1e3be30075a37b362b864634dab8d14957f3c670030a0d37307e9c1d4df24c0b"} Mar 13 01:31:20.333789 master-0 kubenswrapper[19803]: I0313 01:31:20.333617 19803 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cd7664db7-4ljbn" event={"ID":"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b","Type":"ContainerStarted","Data":"ff72e194cfed7b4d6896effc625ddce61ec30fea3948172abb83b79b2d88ad8e"} Mar 13 01:31:20.333789 master-0 kubenswrapper[19803]: I0313 01:31:20.333658 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb" event={"ID":"1e9feb3b-8bbd-4778-b23c-153143635880","Type":"ContainerStarted","Data":"cb4827c41062127013d14613c245bdddf863bae4205665d33201dc727031e41e"} Mar 13 01:31:20.333789 master-0 kubenswrapper[19803]: I0313 01:31:20.333685 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" event={"ID":"2dc7825a-7953-4bbc-908e-5a65741568e7","Type":"ContainerStarted","Data":"55655d200ab3a93d1e1e1acaeec4456ebd90970a70b0e1c26c6032077bab37c5"} Mar 13 01:31:20.333789 master-0 kubenswrapper[19803]: I0313 01:31:20.333706 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" event={"ID":"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf","Type":"ContainerStarted","Data":"40a654278bf74746523d69d0e52c39a5f433eeb3dff82b62647a3609b9c14c95"} Mar 13 01:31:20.333789 master-0 kubenswrapper[19803]: I0313 01:31:20.333727 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9f797fd7-03a8-4b62-82c2-2015dd076114","Type":"ContainerDied","Data":"705844a26e68112fd426932955dbc382b79acab40b29ffeeb33ee7721fd15ccf"} Mar 13 01:31:20.333789 master-0 kubenswrapper[19803]: I0313 01:31:20.333751 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9f797fd7-03a8-4b62-82c2-2015dd076114","Type":"ContainerStarted","Data":"6f2c7d0022c69fab0a5e296a0038c7b6ef7f547780f426897bdc2f1958819b88"} Mar 13 
01:31:20.333789 master-0 kubenswrapper[19803]: I0313 01:31:20.333769 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"83d4214e-5ca9-401d-bd0c-860f02034a10","Type":"ContainerDied","Data":"67de0332710f16978bcf7373b9f113b926f75cb0c61b11bfbb4a553a666e8660"} Mar 13 01:31:20.333789 master-0 kubenswrapper[19803]: I0313 01:31:20.333789 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"83d4214e-5ca9-401d-bd0c-860f02034a10","Type":"ContainerStarted","Data":"b81810b3f9db19613bcfa9356d505b719faf405b080c4c5d2213ecbb4d2950f2"} Mar 13 01:31:20.606431 master-0 kubenswrapper[19803]: I0313 01:31:20.606383 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"thanos-querier-kube-rbac-proxy-web" Mar 13 01:31:20.663847 master-0 kubenswrapper[19803]: I0313 01:31:20.663539 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Mar 13 01:31:21.013388 master-0 kubenswrapper[19803]: I0313 01:31:21.010877 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Mar 13 01:31:21.168534 master-0 kubenswrapper[19803]: I0313 01:31:21.165845 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Mar 13 01:31:21.181080 master-0 kubenswrapper[19803]: I0313 01:31:21.178318 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-bbmgf" Mar 13 01:31:21.340796 master-0 kubenswrapper[19803]: I0313 01:31:21.340708 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb" 
event={"ID":"1e9feb3b-8bbd-4778-b23c-153143635880","Type":"ContainerStarted","Data":"4aa1cab1bdea7192e1e652adecf64facbc1fbb97036f51dd6c228bbda006ea17"} Mar 13 01:31:21.341270 master-0 kubenswrapper[19803]: I0313 01:31:21.341212 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb" Mar 13 01:31:21.344305 master-0 kubenswrapper[19803]: I0313 01:31:21.344230 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" event={"ID":"2dc7825a-7953-4bbc-908e-5a65741568e7","Type":"ContainerStarted","Data":"e695ad749343cb1fe8b5d63da1bcb7f4b08499993377ff98539684236af17bc4"} Mar 13 01:31:21.358554 master-0 kubenswrapper[19803]: I0313 01:31:21.346106 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:21.358554 master-0 kubenswrapper[19803]: I0313 01:31:21.352339 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb" Mar 13 01:31:21.358715 master-0 kubenswrapper[19803]: I0313 01:31:21.358587 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9f797fd7-03a8-4b62-82c2-2015dd076114","Type":"ContainerStarted","Data":"b3fc66fa6c139558c0d28bcffabeee4ca84f5a30c63c7bc08fc27decaa29b21e"} Mar 13 01:31:21.358715 master-0 kubenswrapper[19803]: I0313 01:31:21.358658 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9f797fd7-03a8-4b62-82c2-2015dd076114","Type":"ContainerStarted","Data":"c6cfa7af616482d067e3fb1f772a459911e905f5d94bc989c910494035511d70"} Mar 13 01:31:21.358715 master-0 kubenswrapper[19803]: I0313 01:31:21.358692 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9f797fd7-03a8-4b62-82c2-2015dd076114","Type":"ContainerStarted","Data":"f5d5631ae3a67d95accdcc85d2376ec840bc8d64d260adc26e1d6b304af32dd8"} Mar 13 01:31:21.370344 master-0 kubenswrapper[19803]: I0313 01:31:21.369039 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" Mar 13 01:31:21.377587 master-0 kubenswrapper[19803]: I0313 01:31:21.375832 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-fc468456-6rjcb" podStartSLOduration=203.375788114 podStartE2EDuration="3m23.375788114s" podCreationTimestamp="2026-03-13 01:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:31:21.372423412 +0000 UTC m=+829.337571101" watchObservedRunningTime="2026-03-13 01:31:21.375788114 +0000 UTC m=+829.340935793" Mar 13 01:31:21.377587 master-0 kubenswrapper[19803]: I0313 01:31:21.376574 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"83d4214e-5ca9-401d-bd0c-860f02034a10","Type":"ContainerStarted","Data":"6fbceb7e3e0c10db13048bcf214fc275861f8ca8f494e3e7cbf5e15fa8e465c4"} Mar 13 01:31:21.377587 master-0 kubenswrapper[19803]: I0313 01:31:21.376695 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"83d4214e-5ca9-401d-bd0c-860f02034a10","Type":"ContainerStarted","Data":"c57ecd296b6b3903bbf65eb2b9b9f792a24f701d307be56152bc63877e7425e9"} Mar 13 01:31:21.377587 master-0 kubenswrapper[19803]: I0313 01:31:21.376732 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"83d4214e-5ca9-401d-bd0c-860f02034a10","Type":"ContainerStarted","Data":"0707be0aa73e13dd0cce76c68ff865a6432bd395e8fc6b22545941359e5285f3"} Mar 13 01:31:21.444983 master-0 kubenswrapper[19803]: I0313 01:31:21.444937 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-catalogd"/"openshift-service-ca.crt" Mar 13 01:31:21.782623 master-0 kubenswrapper[19803]: I0313 01:31:21.782400 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Mar 13 01:31:22.346971 master-0 kubenswrapper[19803]: I0313 01:31:22.345933 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-5bc647b4dd-x4f22" podStartSLOduration=204.345907014 podStartE2EDuration="3m24.345907014s" podCreationTimestamp="2026-03-13 01:27:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:31:21.445702612 +0000 UTC m=+829.410850301" watchObservedRunningTime="2026-03-13 01:31:22.345907014 +0000 UTC m=+830.311054693" Mar 13 01:31:22.390552 master-0 kubenswrapper[19803]: I0313 01:31:22.390421 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"83d4214e-5ca9-401d-bd0c-860f02034a10","Type":"ContainerStarted","Data":"cbbb599f67781cd207fc6c2b31186930cb7385e0b3312b47b4427e9882c9f50d"} Mar 13 01:31:22.390552 master-0 kubenswrapper[19803]: I0313 01:31:22.390478 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" event={"ID":"83d4214e-5ca9-401d-bd0c-860f02034a10","Type":"ContainerStarted","Data":"49cdaec5a1de364ad0300ec280ffe974064d04f4a7a8b285adee8c51ccb2ab7d"} Mar 13 01:31:22.390552 master-0 kubenswrapper[19803]: I0313 01:31:22.390489 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/prometheus-k8s-0" 
event={"ID":"83d4214e-5ca9-401d-bd0c-860f02034a10","Type":"ContainerStarted","Data":"248dde16a3bed4de03a8e6cb754b69db18a0d7368916c5418191f1362cd5aa41"} Mar 13 01:31:22.397184 master-0 kubenswrapper[19803]: I0313 01:31:22.396995 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9f797fd7-03a8-4b62-82c2-2015dd076114","Type":"ContainerStarted","Data":"bfff1c4030ffe40225debf338f647bd597377ffa9ea9815d4ac72d9c2e6636c1"} Mar 13 01:31:22.397184 master-0 kubenswrapper[19803]: I0313 01:31:22.397030 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9f797fd7-03a8-4b62-82c2-2015dd076114","Type":"ContainerStarted","Data":"a3e3ea6b2ee7e874f3e146c80c13a1dbbd4abbcac11edbc5e220b1e3cc454c63"} Mar 13 01:31:22.397184 master-0 kubenswrapper[19803]: I0313 01:31:22.397039 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/alertmanager-main-0" event={"ID":"9f797fd7-03a8-4b62-82c2-2015dd076114","Type":"ContainerStarted","Data":"a5079c9b5c71c085a56807f94b471284089e4327137d08b616e5e6d6eacd0f04"} Mar 13 01:31:22.443672 master-0 kubenswrapper[19803]: I0313 01:31:22.443574 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/alertmanager-main-0" podStartSLOduration=240.443549385 podStartE2EDuration="4m0.443549385s" podCreationTimestamp="2026-03-13 01:27:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:31:22.437809696 +0000 UTC m=+830.402957385" watchObservedRunningTime="2026-03-13 01:31:22.443549385 +0000 UTC m=+830.408697084" Mar 13 01:31:22.484837 master-0 kubenswrapper[19803]: I0313 01:31:22.484782 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"node-exporter-dockercfg-75ldw" Mar 13 01:31:22.485425 master-0 kubenswrapper[19803]: I0313 
01:31:22.484805 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/prometheus-k8s-0" podStartSLOduration=236.484775576 podStartE2EDuration="3m56.484775576s" podCreationTimestamp="2026-03-13 01:27:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:31:22.479764985 +0000 UTC m=+830.444912664" watchObservedRunningTime="2026-03-13 01:31:22.484775576 +0000 UTC m=+830.449923255" Mar 13 01:31:22.560597 master-0 kubenswrapper[19803]: I0313 01:31:22.560539 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Mar 13 01:31:24.404635 master-0 kubenswrapper[19803]: I0313 01:31:24.404550 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:31:25.426587 master-0 kubenswrapper[19803]: I0313 01:31:25.426359 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cd7664db7-4ljbn" event={"ID":"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b","Type":"ContainerStarted","Data":"361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e"} Mar 13 01:31:25.429860 master-0 kubenswrapper[19803]: I0313 01:31:25.429813 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" event={"ID":"364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf","Type":"ContainerStarted","Data":"97a948e3ba26e0c6f18fe3015a2dc817db9e83e40de873d1fd2ea85d01db76b3"} Mar 13 01:31:25.430743 master-0 kubenswrapper[19803]: I0313 01:31:25.430694 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:25.434612 master-0 kubenswrapper[19803]: I0313 01:31:25.434499 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c44cb5779-v77m6" 
event={"ID":"f65326a0-0c48-4424-8269-135d5e800127","Type":"ContainerStarted","Data":"d57729b33d5f5a53186e5d06cef62d7bbbbc986d18bc1464921faed040faff16"} Mar 13 01:31:25.462768 master-0 kubenswrapper[19803]: I0313 01:31:25.462655 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5cd7664db7-4ljbn" podStartSLOduration=233.65712281 podStartE2EDuration="3m58.462625375s" podCreationTimestamp="2026-03-13 01:27:27 +0000 UTC" firstStartedPulling="2026-03-13 01:31:20.076617672 +0000 UTC m=+828.041765361" lastFinishedPulling="2026-03-13 01:31:24.882120247 +0000 UTC m=+832.847267926" observedRunningTime="2026-03-13 01:31:25.45458614 +0000 UTC m=+833.419733829" watchObservedRunningTime="2026-03-13 01:31:25.462625375 +0000 UTC m=+833.427773064" Mar 13 01:31:25.498808 master-0 kubenswrapper[19803]: I0313 01:31:25.498664 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5c44cb5779-v77m6" podStartSLOduration=255.409210351 podStartE2EDuration="4m20.49860768s" podCreationTimestamp="2026-03-13 01:27:05 +0000 UTC" firstStartedPulling="2026-03-13 01:31:19.797152386 +0000 UTC m=+827.762300065" lastFinishedPulling="2026-03-13 01:31:24.886549685 +0000 UTC m=+832.851697394" observedRunningTime="2026-03-13 01:31:25.484691912 +0000 UTC m=+833.449839621" watchObservedRunningTime="2026-03-13 01:31:25.49860768 +0000 UTC m=+833.463755409" Mar 13 01:31:25.531102 master-0 kubenswrapper[19803]: I0313 01:31:25.530969 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" podStartSLOduration=102.834122999 podStartE2EDuration="1m47.530930754s" podCreationTimestamp="2026-03-13 01:29:38 +0000 UTC" firstStartedPulling="2026-03-13 01:31:20.166597138 +0000 UTC m=+828.131744827" lastFinishedPulling="2026-03-13 01:31:24.863404903 +0000 UTC m=+832.828552582" observedRunningTime="2026-03-13 01:31:25.515413698 +0000 UTC 
m=+833.480561387" watchObservedRunningTime="2026-03-13 01:31:25.530930754 +0000 UTC m=+833.496078443" Mar 13 01:31:25.778688 master-0 kubenswrapper[19803]: I0313 01:31:25.778118 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-575f7bbb59-ntckb" Mar 13 01:31:28.077028 master-0 kubenswrapper[19803]: I0313 01:31:28.076939 19803 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0"] Mar 13 01:31:28.078741 master-0 kubenswrapper[19803]: I0313 01:31:28.078645 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" containerID="cri-o://6a0ea16b4eddf0e26b82f667b75df39ac5650bd6ba07f40d5048e4ffe6bf4805" gracePeriod=5 Mar 13 01:31:29.361458 master-0 kubenswrapper[19803]: I0313 01:31:29.361359 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:29.362048 master-0 kubenswrapper[19803]: I0313 01:31:29.361496 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:29.363621 master-0 kubenswrapper[19803]: I0313 01:31:29.363490 19803 patch_prober.go:28] interesting pod/console-5c44cb5779-v77m6 container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.101:8443/health\": dial tcp 10.128.0.101:8443: connect: connection refused" start-of-body= Mar 13 01:31:29.363712 master-0 kubenswrapper[19803]: I0313 01:31:29.363664 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5c44cb5779-v77m6" podUID="f65326a0-0c48-4424-8269-135d5e800127" containerName="console" probeResult="failure" output="Get \"https://10.128.0.101:8443/health\": dial tcp 
10.128.0.101:8443: connect: connection refused" Mar 13 01:31:29.431252 master-0 kubenswrapper[19803]: I0313 01:31:29.431141 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:29.431438 master-0 kubenswrapper[19803]: I0313 01:31:29.431276 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:29.434196 master-0 kubenswrapper[19803]: I0313 01:31:29.434142 19803 patch_prober.go:28] interesting pod/console-5cd7664db7-4ljbn container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" start-of-body= Mar 13 01:31:29.434278 master-0 kubenswrapper[19803]: I0313 01:31:29.434215 19803 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-5cd7664db7-4ljbn" podUID="e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" containerName="console" probeResult="failure" output="Get \"https://10.128.0.96:8443/health\": dial tcp 10.128.0.96:8443: connect: connection refused" Mar 13 01:31:33.523858 master-0 kubenswrapper[19803]: I0313 01:31:33.523764 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 13 01:31:33.524877 master-0 kubenswrapper[19803]: I0313 01:31:33.523876 19803 generic.go:334] "Generic (PLEG): container finished" podID="899242a15b2bdf3b4a04fb323647ca94" containerID="6a0ea16b4eddf0e26b82f667b75df39ac5650bd6ba07f40d5048e4ffe6bf4805" exitCode=137 Mar 13 01:31:33.691314 master-0 kubenswrapper[19803]: I0313 01:31:33.691222 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 13 01:31:33.691626 master-0 
kubenswrapper[19803]: I0313 01:31:33.691384 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:31:33.733812 master-0 kubenswrapper[19803]: I0313 01:31:33.733699 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 01:31:33.733986 master-0 kubenswrapper[19803]: I0313 01:31:33.733939 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 01:31:33.734133 master-0 kubenswrapper[19803]: I0313 01:31:33.734079 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 01:31:33.734133 master-0 kubenswrapper[19803]: I0313 01:31:33.734086 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock" (OuterVolumeSpecName: "var-lock") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:31:33.734282 master-0 kubenswrapper[19803]: I0313 01:31:33.734169 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 01:31:33.734282 master-0 kubenswrapper[19803]: I0313 01:31:33.734237 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") pod \"899242a15b2bdf3b4a04fb323647ca94\" (UID: \"899242a15b2bdf3b4a04fb323647ca94\") " Mar 13 01:31:33.734282 master-0 kubenswrapper[19803]: I0313 01:31:33.734229 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:31:33.734473 master-0 kubenswrapper[19803]: I0313 01:31:33.734345 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log" (OuterVolumeSpecName: "var-log") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:31:33.734615 master-0 kubenswrapper[19803]: I0313 01:31:33.734506 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests" (OuterVolumeSpecName: "manifests") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:31:33.735226 master-0 kubenswrapper[19803]: I0313 01:31:33.735179 19803 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-lock\") on node \"master-0\" DevicePath \"\"" Mar 13 01:31:33.735226 master-0 kubenswrapper[19803]: I0313 01:31:33.735215 19803 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:31:33.735380 master-0 kubenswrapper[19803]: I0313 01:31:33.735241 19803 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-var-log\") on node \"master-0\" DevicePath \"\"" Mar 13 01:31:33.735380 master-0 kubenswrapper[19803]: I0313 01:31:33.735261 19803 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-manifests\") on node \"master-0\" DevicePath \"\"" Mar 13 01:31:33.743577 master-0 kubenswrapper[19803]: I0313 01:31:33.743474 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "899242a15b2bdf3b4a04fb323647ca94" (UID: "899242a15b2bdf3b4a04fb323647ca94"). InnerVolumeSpecName "pod-resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Mar 13 01:31:33.840768 master-0 kubenswrapper[19803]: I0313 01:31:33.837083 19803 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/899242a15b2bdf3b4a04fb323647ca94-pod-resource-dir\") on node \"master-0\" DevicePath \"\"" Mar 13 01:31:34.330785 master-0 kubenswrapper[19803]: I0313 01:31:34.330624 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="899242a15b2bdf3b4a04fb323647ca94" path="/var/lib/kubelet/pods/899242a15b2bdf3b4a04fb323647ca94/volumes" Mar 13 01:31:34.540703 master-0 kubenswrapper[19803]: I0313 01:31:34.540597 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-master-0_899242a15b2bdf3b4a04fb323647ca94/startup-monitor/0.log" Mar 13 01:31:34.541859 master-0 kubenswrapper[19803]: I0313 01:31:34.540737 19803 scope.go:117] "RemoveContainer" containerID="6a0ea16b4eddf0e26b82f667b75df39ac5650bd6ba07f40d5048e4ffe6bf4805" Mar 13 01:31:34.541859 master-0 kubenswrapper[19803]: I0313 01:31:34.540944 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-master-0" Mar 13 01:31:37.950188 master-0 kubenswrapper[19803]: I0313 01:31:37.950032 19803 scope.go:117] "RemoveContainer" containerID="8758f285d02298f3f87cf8a95d69a9b9fc7adb315bfb680293d79f27940394d1" Mar 13 01:31:39.057839 master-0 kubenswrapper[19803]: I0313 01:31:39.057766 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-2sc42"] Mar 13 01:31:39.058480 master-0 kubenswrapper[19803]: E0313 01:31:39.058204 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 13 01:31:39.058480 master-0 kubenswrapper[19803]: I0313 01:31:39.058229 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 13 01:31:39.058480 master-0 kubenswrapper[19803]: I0313 01:31:39.058424 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="899242a15b2bdf3b4a04fb323647ca94" containerName="startup-monitor" Mar 13 01:31:39.059262 master-0 kubenswrapper[19803]: I0313 01:31:39.059241 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" Mar 13 01:31:39.062842 master-0 kubenswrapper[19803]: I0313 01:31:39.062783 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Mar 13 01:31:39.063157 master-0 kubenswrapper[19803]: I0313 01:31:39.063129 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Mar 13 01:31:39.065292 master-0 kubenswrapper[19803]: I0313 01:31:39.065243 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"default-dockercfg-4rgn2" Mar 13 01:31:39.077827 master-0 kubenswrapper[19803]: I0313 01:31:39.077770 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-2sc42"] Mar 13 01:31:39.140952 master-0 kubenswrapper[19803]: I0313 01:31:39.140870 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/deda0241-2f3f-48ca-b2d1-f0c0287e258e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-2sc42\" (UID: \"deda0241-2f3f-48ca-b2d1-f0c0287e258e\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" Mar 13 01:31:39.141234 master-0 kubenswrapper[19803]: I0313 01:31:39.140988 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/deda0241-2f3f-48ca-b2d1-f0c0287e258e-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-2sc42\" (UID: \"deda0241-2f3f-48ca-b2d1-f0c0287e258e\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" Mar 13 01:31:39.243085 master-0 kubenswrapper[19803]: I0313 01:31:39.242961 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/deda0241-2f3f-48ca-b2d1-f0c0287e258e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-2sc42\" (UID: \"deda0241-2f3f-48ca-b2d1-f0c0287e258e\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" Mar 13 01:31:39.243085 master-0 kubenswrapper[19803]: I0313 01:31:39.243077 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/deda0241-2f3f-48ca-b2d1-f0c0287e258e-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-2sc42\" (UID: \"deda0241-2f3f-48ca-b2d1-f0c0287e258e\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" Mar 13 01:31:39.244153 master-0 kubenswrapper[19803]: I0313 01:31:39.244111 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/deda0241-2f3f-48ca-b2d1-f0c0287e258e-nginx-conf\") pod \"networking-console-plugin-5cbd49d755-2sc42\" (UID: \"deda0241-2f3f-48ca-b2d1-f0c0287e258e\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" Mar 13 01:31:39.248702 master-0 kubenswrapper[19803]: I0313 01:31:39.248651 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/deda0241-2f3f-48ca-b2d1-f0c0287e258e-networking-console-plugin-cert\") pod \"networking-console-plugin-5cbd49d755-2sc42\" (UID: \"deda0241-2f3f-48ca-b2d1-f0c0287e258e\") " pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" Mar 13 01:31:39.369008 master-0 kubenswrapper[19803]: I0313 01:31:39.368831 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:39.377127 master-0 kubenswrapper[19803]: I0313 01:31:39.377064 19803 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:31:39.396848 master-0 kubenswrapper[19803]: I0313 01:31:39.396764 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" Mar 13 01:31:39.451560 master-0 kubenswrapper[19803]: I0313 01:31:39.448174 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:39.469572 master-0 kubenswrapper[19803]: I0313 01:31:39.469474 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:31:39.627578 master-0 kubenswrapper[19803]: I0313 01:31:39.627451 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5c44cb5779-v77m6"] Mar 13 01:31:39.977168 master-0 kubenswrapper[19803]: I0313 01:31:39.977081 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-network-console/networking-console-plugin-5cbd49d755-2sc42"] Mar 13 01:31:40.622375 master-0 kubenswrapper[19803]: I0313 01:31:40.622274 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" event={"ID":"deda0241-2f3f-48ca-b2d1-f0c0287e258e","Type":"ContainerStarted","Data":"8e9d0912fee7461d39943699bc03a4caea627904aea80bbc7316bcc603dadc8f"} Mar 13 01:31:41.637990 master-0 kubenswrapper[19803]: I0313 01:31:41.637908 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" event={"ID":"deda0241-2f3f-48ca-b2d1-f0c0287e258e","Type":"ContainerStarted","Data":"3917cab3fe5a2cd2e6b28803db6e536815b79553ae0491969c57f255590484b8"} Mar 13 01:31:41.674302 master-0 kubenswrapper[19803]: I0313 01:31:41.673992 19803 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-network-console/networking-console-plugin-5cbd49d755-2sc42" podStartSLOduration=1.254749819 podStartE2EDuration="2.673967086s" podCreationTimestamp="2026-03-13 01:31:39 +0000 UTC" firstStartedPulling="2026-03-13 01:31:39.982060008 +0000 UTC m=+847.947207727" lastFinishedPulling="2026-03-13 01:31:41.401277305 +0000 UTC m=+849.366424994" observedRunningTime="2026-03-13 01:31:41.664944838 +0000 UTC m=+849.630092557" watchObservedRunningTime="2026-03-13 01:31:41.673967086 +0000 UTC m=+849.639114775" Mar 13 01:32:05.675742 master-0 kubenswrapper[19803]: I0313 01:32:05.675627 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5c44cb5779-v77m6" podUID="f65326a0-0c48-4424-8269-135d5e800127" containerName="console" containerID="cri-o://d57729b33d5f5a53186e5d06cef62d7bbbbc986d18bc1464921faed040faff16" gracePeriod=15 Mar 13 01:32:05.892706 master-0 kubenswrapper[19803]: I0313 01:32:05.892651 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c44cb5779-v77m6_f65326a0-0c48-4424-8269-135d5e800127/console/0.log" Mar 13 01:32:05.892706 master-0 kubenswrapper[19803]: I0313 01:32:05.892705 19803 generic.go:334] "Generic (PLEG): container finished" podID="f65326a0-0c48-4424-8269-135d5e800127" containerID="d57729b33d5f5a53186e5d06cef62d7bbbbc986d18bc1464921faed040faff16" exitCode=2 Mar 13 01:32:05.893051 master-0 kubenswrapper[19803]: I0313 01:32:05.892741 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c44cb5779-v77m6" event={"ID":"f65326a0-0c48-4424-8269-135d5e800127","Type":"ContainerDied","Data":"d57729b33d5f5a53186e5d06cef62d7bbbbc986d18bc1464921faed040faff16"} Mar 13 01:32:06.189219 master-0 kubenswrapper[19803]: I0313 01:32:06.189162 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c44cb5779-v77m6_f65326a0-0c48-4424-8269-135d5e800127/console/0.log" Mar 13 01:32:06.189374 
master-0 kubenswrapper[19803]: I0313 01:32:06.189283 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:32:06.272372 master-0 kubenswrapper[19803]: I0313 01:32:06.272249 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-trusted-ca-bundle\") pod \"f65326a0-0c48-4424-8269-135d5e800127\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " Mar 13 01:32:06.272372 master-0 kubenswrapper[19803]: I0313 01:32:06.272377 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jjvzc\" (UniqueName: \"kubernetes.io/projected/f65326a0-0c48-4424-8269-135d5e800127-kube-api-access-jjvzc\") pod \"f65326a0-0c48-4424-8269-135d5e800127\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " Mar 13 01:32:06.272950 master-0 kubenswrapper[19803]: I0313 01:32:06.272554 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-serving-cert\") pod \"f65326a0-0c48-4424-8269-135d5e800127\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " Mar 13 01:32:06.272950 master-0 kubenswrapper[19803]: I0313 01:32:06.272660 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-oauth-config\") pod \"f65326a0-0c48-4424-8269-135d5e800127\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " Mar 13 01:32:06.272950 master-0 kubenswrapper[19803]: I0313 01:32:06.272773 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-service-ca\") pod 
\"f65326a0-0c48-4424-8269-135d5e800127\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " Mar 13 01:32:06.273371 master-0 kubenswrapper[19803]: I0313 01:32:06.273260 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f65326a0-0c48-4424-8269-135d5e800127" (UID: "f65326a0-0c48-4424-8269-135d5e800127"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:32:06.273677 master-0 kubenswrapper[19803]: I0313 01:32:06.273616 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-service-ca" (OuterVolumeSpecName: "service-ca") pod "f65326a0-0c48-4424-8269-135d5e800127" (UID: "f65326a0-0c48-4424-8269-135d5e800127"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:32:06.274012 master-0 kubenswrapper[19803]: I0313 01:32:06.273941 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-console-config\") pod \"f65326a0-0c48-4424-8269-135d5e800127\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " Mar 13 01:32:06.274304 master-0 kubenswrapper[19803]: I0313 01:32:06.274273 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-oauth-serving-cert\") pod \"f65326a0-0c48-4424-8269-135d5e800127\" (UID: \"f65326a0-0c48-4424-8269-135d5e800127\") " Mar 13 01:32:06.274473 master-0 kubenswrapper[19803]: I0313 01:32:06.274433 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-console-config" (OuterVolumeSpecName: 
"console-config") pod "f65326a0-0c48-4424-8269-135d5e800127" (UID: "f65326a0-0c48-4424-8269-135d5e800127"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:32:06.274735 master-0 kubenswrapper[19803]: I0313 01:32:06.274699 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "f65326a0-0c48-4424-8269-135d5e800127" (UID: "f65326a0-0c48-4424-8269-135d5e800127"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:32:06.275710 master-0 kubenswrapper[19803]: I0313 01:32:06.275676 19803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:32:06.275879 master-0 kubenswrapper[19803]: I0313 01:32:06.275854 19803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:32:06.276029 master-0 kubenswrapper[19803]: I0313 01:32:06.276005 19803 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:32:06.276188 master-0 kubenswrapper[19803]: I0313 01:32:06.276163 19803 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/f65326a0-0c48-4424-8269-135d5e800127-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:32:06.276339 master-0 kubenswrapper[19803]: I0313 01:32:06.276019 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "f65326a0-0c48-4424-8269-135d5e800127" (UID: "f65326a0-0c48-4424-8269-135d5e800127"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:32:06.281655 master-0 kubenswrapper[19803]: I0313 01:32:06.278355 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65326a0-0c48-4424-8269-135d5e800127-kube-api-access-jjvzc" (OuterVolumeSpecName: "kube-api-access-jjvzc") pod "f65326a0-0c48-4424-8269-135d5e800127" (UID: "f65326a0-0c48-4424-8269-135d5e800127"). InnerVolumeSpecName "kube-api-access-jjvzc". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:32:06.281655 master-0 kubenswrapper[19803]: I0313 01:32:06.278377 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "f65326a0-0c48-4424-8269-135d5e800127" (UID: "f65326a0-0c48-4424-8269-135d5e800127"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:32:06.377944 master-0 kubenswrapper[19803]: I0313 01:32:06.377874 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jjvzc\" (UniqueName: \"kubernetes.io/projected/f65326a0-0c48-4424-8269-135d5e800127-kube-api-access-jjvzc\") on node \"master-0\" DevicePath \"\"" Mar 13 01:32:06.377944 master-0 kubenswrapper[19803]: I0313 01:32:06.377933 19803 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:32:06.377944 master-0 kubenswrapper[19803]: I0313 01:32:06.377946 19803 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/f65326a0-0c48-4424-8269-135d5e800127-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:32:06.918652 master-0 kubenswrapper[19803]: I0313 01:32:06.913361 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5c44cb5779-v77m6_f65326a0-0c48-4424-8269-135d5e800127/console/0.log" Mar 13 01:32:06.918652 master-0 kubenswrapper[19803]: I0313 01:32:06.913456 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5c44cb5779-v77m6" event={"ID":"f65326a0-0c48-4424-8269-135d5e800127","Type":"ContainerDied","Data":"1e3be30075a37b362b864634dab8d14957f3c670030a0d37307e9c1d4df24c0b"} Mar 13 01:32:06.918652 master-0 kubenswrapper[19803]: I0313 01:32:06.913534 19803 scope.go:117] "RemoveContainer" containerID="d57729b33d5f5a53186e5d06cef62d7bbbbc986d18bc1464921faed040faff16" Mar 13 01:32:06.918652 master-0 kubenswrapper[19803]: I0313 01:32:06.913654 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5c44cb5779-v77m6" Mar 13 01:32:06.952150 master-0 kubenswrapper[19803]: I0313 01:32:06.952047 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5c44cb5779-v77m6"] Mar 13 01:32:06.960210 master-0 kubenswrapper[19803]: I0313 01:32:06.960156 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5c44cb5779-v77m6"] Mar 13 01:32:08.333352 master-0 kubenswrapper[19803]: I0313 01:32:08.333229 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65326a0-0c48-4424-8269-135d5e800127" path="/var/lib/kubelet/pods/f65326a0-0c48-4424-8269-135d5e800127/volumes" Mar 13 01:32:19.405617 master-0 kubenswrapper[19803]: I0313 01:32:19.405365 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:32:19.460662 master-0 kubenswrapper[19803]: I0313 01:32:19.460580 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:32:20.142410 master-0 kubenswrapper[19803]: I0313 01:32:20.142361 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-monitoring/prometheus-k8s-0" Mar 13 01:32:32.547451 master-0 kubenswrapper[19803]: I0313 01:32:32.547344 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-846db4bc94-fklp6"] Mar 13 01:32:32.548218 master-0 kubenswrapper[19803]: E0313 01:32:32.548152 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f65326a0-0c48-4424-8269-135d5e800127" containerName="console" Mar 13 01:32:32.548218 master-0 kubenswrapper[19803]: I0313 01:32:32.548212 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f65326a0-0c48-4424-8269-135d5e800127" containerName="console" Mar 13 01:32:32.548658 master-0 kubenswrapper[19803]: I0313 01:32:32.548616 19803 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="f65326a0-0c48-4424-8269-135d5e800127" containerName="console" Mar 13 01:32:32.549904 master-0 kubenswrapper[19803]: I0313 01:32:32.549861 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.556160 master-0 kubenswrapper[19803]: I0313 01:32:32.556093 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-846db4bc94-fklp6"] Mar 13 01:32:32.681754 master-0 kubenswrapper[19803]: I0313 01:32:32.681663 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqkpr\" (UniqueName: \"kubernetes.io/projected/462eaab3-2c83-41c1-ad56-0121ee483d42-kube-api-access-dqkpr\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.681998 master-0 kubenswrapper[19803]: I0313 01:32:32.681935 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-service-ca\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.682263 master-0 kubenswrapper[19803]: I0313 01:32:32.682041 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-console-config\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.682263 master-0 kubenswrapper[19803]: I0313 01:32:32.682198 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-trusted-ca-bundle\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.685316 master-0 kubenswrapper[19803]: I0313 01:32:32.682502 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-oauth-serving-cert\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.685316 master-0 kubenswrapper[19803]: I0313 01:32:32.682591 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-serving-cert\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.685316 master-0 kubenswrapper[19803]: I0313 01:32:32.682648 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-oauth-config\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.784996 master-0 kubenswrapper[19803]: I0313 01:32:32.784866 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-oauth-serving-cert\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.784996 master-0 kubenswrapper[19803]: I0313 01:32:32.784974 
19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-serving-cert\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.785422 master-0 kubenswrapper[19803]: I0313 01:32:32.785019 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-oauth-config\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.785497 master-0 kubenswrapper[19803]: I0313 01:32:32.785386 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqkpr\" (UniqueName: \"kubernetes.io/projected/462eaab3-2c83-41c1-ad56-0121ee483d42-kube-api-access-dqkpr\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.788000 master-0 kubenswrapper[19803]: I0313 01:32:32.785811 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-service-ca\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.788000 master-0 kubenswrapper[19803]: I0313 01:32:32.786001 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-oauth-serving-cert\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.788000 master-0 
kubenswrapper[19803]: I0313 01:32:32.787593 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-service-ca\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.789655 master-0 kubenswrapper[19803]: I0313 01:32:32.789599 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-serving-cert\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.789794 master-0 kubenswrapper[19803]: I0313 01:32:32.789666 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-console-config\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.790661 master-0 kubenswrapper[19803]: I0313 01:32:32.790573 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-console-config\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.790848 master-0 kubenswrapper[19803]: I0313 01:32:32.790728 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-trusted-ca-bundle\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.792560 master-0 
kubenswrapper[19803]: I0313 01:32:32.792442 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-trusted-ca-bundle\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.794583 master-0 kubenswrapper[19803]: I0313 01:32:32.794489 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-oauth-config\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.813085 master-0 kubenswrapper[19803]: I0313 01:32:32.812875 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqkpr\" (UniqueName: \"kubernetes.io/projected/462eaab3-2c83-41c1-ad56-0121ee483d42-kube-api-access-dqkpr\") pod \"console-846db4bc94-fklp6\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:32.900064 master-0 kubenswrapper[19803]: I0313 01:32:32.899972 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:33.441044 master-0 kubenswrapper[19803]: I0313 01:32:33.440625 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-846db4bc94-fklp6"] Mar 13 01:32:33.452304 master-0 kubenswrapper[19803]: W0313 01:32:33.452208 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod462eaab3_2c83_41c1_ad56_0121ee483d42.slice/crio-0392951351a8b428ed3d9fd0a6044ce1d028815810bbd9f7b25f2bae675ffcb8 WatchSource:0}: Error finding container 0392951351a8b428ed3d9fd0a6044ce1d028815810bbd9f7b25f2bae675ffcb8: Status 404 returned error can't find the container with id 0392951351a8b428ed3d9fd0a6044ce1d028815810bbd9f7b25f2bae675ffcb8 Mar 13 01:32:34.056387 master-0 kubenswrapper[19803]: I0313 01:32:34.056309 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-monitoring/telemeter-client-7d955bd7d-xxddg"] Mar 13 01:32:34.059136 master-0 kubenswrapper[19803]: I0313 01:32:34.059098 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.063065 master-0 kubenswrapper[19803]: I0313 01:32:34.063031 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-kube-rbac-proxy-config" Mar 13 01:32:34.063256 master-0 kubenswrapper[19803]: I0313 01:32:34.063102 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"federate-client-certs" Mar 13 01:32:34.063332 master-0 kubenswrapper[19803]: I0313 01:32:34.063140 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client-tls" Mar 13 01:32:34.063415 master-0 kubenswrapper[19803]: I0313 01:32:34.063388 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-client-serving-certs-ca-bundle" Mar 13 01:32:34.063550 master-0 kubenswrapper[19803]: I0313 01:32:34.063532 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-monitoring"/"telemeter-client" Mar 13 01:32:34.069227 master-0 kubenswrapper[19803]: I0313 01:32:34.069189 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-monitoring"/"telemeter-trusted-ca-bundle-8i12ta5c71j38" Mar 13 01:32:34.085877 master-0 kubenswrapper[19803]: I0313 01:32:34.085812 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7d955bd7d-xxddg"] Mar 13 01:32:34.214663 master-0 kubenswrapper[19803]: I0313 01:32:34.214599 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 
01:32:34.214914 master-0 kubenswrapper[19803]: I0313 01:32:34.214664 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-federate-client-tls\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.214914 master-0 kubenswrapper[19803]: I0313 01:32:34.214735 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da8d30f5-9351-4865-9a0c-a5aae2118684-serving-certs-ca-bundle\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.214914 master-0 kubenswrapper[19803]: I0313 01:32:34.214775 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-telemeter-client-tls\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.214914 master-0 kubenswrapper[19803]: I0313 01:32:34.214807 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da8d30f5-9351-4865-9a0c-a5aae2118684-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.214914 master-0 kubenswrapper[19803]: I0313 01:32:34.214841 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/da8d30f5-9351-4865-9a0c-a5aae2118684-metrics-client-ca\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.214914 master-0 kubenswrapper[19803]: I0313 01:32:34.214876 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-secret-telemeter-client\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.215109 master-0 kubenswrapper[19803]: I0313 01:32:34.214947 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzsj2\" (UniqueName: \"kubernetes.io/projected/da8d30f5-9351-4865-9a0c-a5aae2118684-kube-api-access-hzsj2\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.232087 master-0 kubenswrapper[19803]: I0313 01:32:34.231943 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846db4bc94-fklp6" event={"ID":"462eaab3-2c83-41c1-ad56-0121ee483d42","Type":"ContainerStarted","Data":"7d547e0bf07e2db893b8168863fd6d657ea918fb6af51f15e6a204275c47ec35"} Mar 13 01:32:34.232087 master-0 kubenswrapper[19803]: I0313 01:32:34.232064 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846db4bc94-fklp6" event={"ID":"462eaab3-2c83-41c1-ad56-0121ee483d42","Type":"ContainerStarted","Data":"0392951351a8b428ed3d9fd0a6044ce1d028815810bbd9f7b25f2bae675ffcb8"} Mar 13 01:32:34.277062 master-0 kubenswrapper[19803]: I0313 01:32:34.276927 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-console/console-846db4bc94-fklp6" podStartSLOduration=2.276718484 podStartE2EDuration="2.276718484s" podCreationTimestamp="2026-03-13 01:32:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:32:34.271360735 +0000 UTC m=+902.236508434" watchObservedRunningTime="2026-03-13 01:32:34.276718484 +0000 UTC m=+902.241866173" Mar 13 01:32:34.318110 master-0 kubenswrapper[19803]: I0313 01:32:34.317896 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hzsj2\" (UniqueName: \"kubernetes.io/projected/da8d30f5-9351-4865-9a0c-a5aae2118684-kube-api-access-hzsj2\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.318343 master-0 kubenswrapper[19803]: I0313 01:32:34.318213 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.318343 master-0 kubenswrapper[19803]: I0313 01:32:34.318259 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-federate-client-tls\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.318343 master-0 kubenswrapper[19803]: I0313 01:32:34.318290 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-certs-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/da8d30f5-9351-4865-9a0c-a5aae2118684-serving-certs-ca-bundle\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.318627 master-0 kubenswrapper[19803]: I0313 01:32:34.318530 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-telemeter-client-tls\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.319335 master-0 kubenswrapper[19803]: I0313 01:32:34.319289 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da8d30f5-9351-4865-9a0c-a5aae2118684-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.319335 master-0 kubenswrapper[19803]: I0313 01:32:34.319318 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da8d30f5-9351-4865-9a0c-a5aae2118684-serving-certs-ca-bundle\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.320733 master-0 kubenswrapper[19803]: I0313 01:32:34.319426 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/da8d30f5-9351-4865-9a0c-a5aae2118684-metrics-client-ca\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 
01:32:34.320733 master-0 kubenswrapper[19803]: I0313 01:32:34.319492 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-secret-telemeter-client\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.320838 master-0 kubenswrapper[19803]: I0313 01:32:34.320804 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-client-ca\" (UniqueName: \"kubernetes.io/configmap/da8d30f5-9351-4865-9a0c-a5aae2118684-metrics-client-ca\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.321445 master-0 kubenswrapper[19803]: I0313 01:32:34.321389 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/da8d30f5-9351-4865-9a0c-a5aae2118684-telemeter-trusted-ca-bundle\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.323242 master-0 kubenswrapper[19803]: I0313 01:32:34.323208 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"federate-client-tls\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-federate-client-tls\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.325009 master-0 kubenswrapper[19803]: I0313 01:32:34.324874 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-secret-telemeter-client\") pod 
\"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.331710 master-0 kubenswrapper[19803]: I0313 01:32:34.331639 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"telemeter-client-tls\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-telemeter-client-tls\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.337950 master-0 kubenswrapper[19803]: I0313 01:32:34.337907 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-telemeter-client-kube-rbac-proxy-config\" (UniqueName: \"kubernetes.io/secret/da8d30f5-9351-4865-9a0c-a5aae2118684-secret-telemeter-client-kube-rbac-proxy-config\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.349924 master-0 kubenswrapper[19803]: I0313 01:32:34.349863 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hzsj2\" (UniqueName: \"kubernetes.io/projected/da8d30f5-9351-4865-9a0c-a5aae2118684-kube-api-access-hzsj2\") pod \"telemeter-client-7d955bd7d-xxddg\" (UID: \"da8d30f5-9351-4865-9a0c-a5aae2118684\") " pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.380480 master-0 kubenswrapper[19803]: I0313 01:32:34.380175 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" Mar 13 01:32:34.933003 master-0 kubenswrapper[19803]: I0313 01:32:34.932660 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-monitoring/telemeter-client-7d955bd7d-xxddg"] Mar 13 01:32:34.934407 master-0 kubenswrapper[19803]: W0313 01:32:34.934352 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda8d30f5_9351_4865_9a0c_a5aae2118684.slice/crio-e57573843fedb070b408225082dcbdb3dd65f86bf085ce0adf16fbc31cc764ec WatchSource:0}: Error finding container e57573843fedb070b408225082dcbdb3dd65f86bf085ce0adf16fbc31cc764ec: Status 404 returned error can't find the container with id e57573843fedb070b408225082dcbdb3dd65f86bf085ce0adf16fbc31cc764ec Mar 13 01:32:34.940648 master-0 kubenswrapper[19803]: I0313 01:32:34.940587 19803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 01:32:35.242436 master-0 kubenswrapper[19803]: I0313 01:32:35.242334 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" event={"ID":"da8d30f5-9351-4865-9a0c-a5aae2118684","Type":"ContainerStarted","Data":"e57573843fedb070b408225082dcbdb3dd65f86bf085ce0adf16fbc31cc764ec"} Mar 13 01:32:38.279449 master-0 kubenswrapper[19803]: I0313 01:32:38.278716 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" event={"ID":"da8d30f5-9351-4865-9a0c-a5aae2118684","Type":"ContainerStarted","Data":"064332dd3e46c49c65fd0efd2d2686b90c85bf909fa9dfb4a08b1bfe83e002e3"} Mar 13 01:32:38.279449 master-0 kubenswrapper[19803]: I0313 01:32:38.278828 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" 
event={"ID":"da8d30f5-9351-4865-9a0c-a5aae2118684","Type":"ContainerStarted","Data":"cd064f44ee57985bf52e1017e65a9fe922fa7b423efc8075706f8a8a56db8695"} Mar 13 01:32:38.279449 master-0 kubenswrapper[19803]: I0313 01:32:38.278851 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" event={"ID":"da8d30f5-9351-4865-9a0c-a5aae2118684","Type":"ContainerStarted","Data":"2fb9fd0e40a0e08ad5cc8d7faf64c274dd5e9d85fec9707d46f51284ff96e558"} Mar 13 01:32:38.317492 master-0 kubenswrapper[19803]: I0313 01:32:38.317386 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-monitoring/telemeter-client-7d955bd7d-xxddg" podStartSLOduration=2.109184979 podStartE2EDuration="4.317362586s" podCreationTimestamp="2026-03-13 01:32:34 +0000 UTC" firstStartedPulling="2026-03-13 01:32:34.940400766 +0000 UTC m=+902.905548475" lastFinishedPulling="2026-03-13 01:32:37.148578363 +0000 UTC m=+905.113726082" observedRunningTime="2026-03-13 01:32:38.312944722 +0000 UTC m=+906.278092411" watchObservedRunningTime="2026-03-13 01:32:38.317362586 +0000 UTC m=+906.282510275" Mar 13 01:32:39.326337 master-0 kubenswrapper[19803]: I0313 01:32:39.326236 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-846db4bc94-fklp6"] Mar 13 01:32:39.376811 master-0 kubenswrapper[19803]: I0313 01:32:39.376270 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-5858f5fbd6-9dwpn"] Mar 13 01:32:39.378645 master-0 kubenswrapper[19803]: I0313 01:32:39.378563 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.396660 master-0 kubenswrapper[19803]: I0313 01:32:39.396491 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-5858f5fbd6-9dwpn"] Mar 13 01:32:39.533554 master-0 kubenswrapper[19803]: I0313 01:32:39.533461 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-service-ca\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.533554 master-0 kubenswrapper[19803]: I0313 01:32:39.533539 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-serving-cert\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.533961 master-0 kubenswrapper[19803]: I0313 01:32:39.533622 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-oauth-serving-cert\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.533961 master-0 kubenswrapper[19803]: I0313 01:32:39.533645 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-oauth-config\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.533961 master-0 
kubenswrapper[19803]: I0313 01:32:39.533672 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cxfc\" (UniqueName: \"kubernetes.io/projected/8f23058f-a564-4869-a7f6-c9b81df47efd-kube-api-access-8cxfc\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.533961 master-0 kubenswrapper[19803]: I0313 01:32:39.533689 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-console-config\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.533961 master-0 kubenswrapper[19803]: I0313 01:32:39.533709 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-trusted-ca-bundle\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.635607 master-0 kubenswrapper[19803]: I0313 01:32:39.635374 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-service-ca\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.635607 master-0 kubenswrapper[19803]: I0313 01:32:39.635465 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-serving-cert\") pod \"console-5858f5fbd6-9dwpn\" (UID: 
\"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.635607 master-0 kubenswrapper[19803]: I0313 01:32:39.635499 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-oauth-serving-cert\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.635607 master-0 kubenswrapper[19803]: I0313 01:32:39.635549 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-oauth-config\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.635607 master-0 kubenswrapper[19803]: I0313 01:32:39.635595 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cxfc\" (UniqueName: \"kubernetes.io/projected/8f23058f-a564-4869-a7f6-c9b81df47efd-kube-api-access-8cxfc\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.636120 master-0 kubenswrapper[19803]: I0313 01:32:39.636003 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-console-config\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.636120 master-0 kubenswrapper[19803]: I0313 01:32:39.636093 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-trusted-ca-bundle\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.636443 master-0 kubenswrapper[19803]: I0313 01:32:39.636388 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-service-ca\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.636584 master-0 kubenswrapper[19803]: I0313 01:32:39.636549 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-oauth-serving-cert\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.637136 master-0 kubenswrapper[19803]: I0313 01:32:39.637083 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-console-config\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.638930 master-0 kubenswrapper[19803]: I0313 01:32:39.638854 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-trusted-ca-bundle\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.641331 master-0 kubenswrapper[19803]: I0313 01:32:39.641271 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-serving-cert\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.644692 master-0 kubenswrapper[19803]: I0313 01:32:39.644609 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-oauth-config\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.672703 master-0 kubenswrapper[19803]: I0313 01:32:39.672606 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cxfc\" (UniqueName: \"kubernetes.io/projected/8f23058f-a564-4869-a7f6-c9b81df47efd-kube-api-access-8cxfc\") pod \"console-5858f5fbd6-9dwpn\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:39.702313 master-0 kubenswrapper[19803]: I0313 01:32:39.702113 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:40.124674 master-0 kubenswrapper[19803]: I0313 01:32:40.124579 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5858f5fbd6-9dwpn"] Mar 13 01:32:40.169183 master-0 kubenswrapper[19803]: I0313 01:32:40.169101 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-59c5b4f6c8-xvqg6"] Mar 13 01:32:40.170359 master-0 kubenswrapper[19803]: I0313 01:32:40.170282 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.189353 master-0 kubenswrapper[19803]: I0313 01:32:40.189293 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59c5b4f6c8-xvqg6"] Mar 13 01:32:40.197268 master-0 kubenswrapper[19803]: I0313 01:32:40.197215 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5858f5fbd6-9dwpn"] Mar 13 01:32:40.198609 master-0 kubenswrapper[19803]: W0313 01:32:40.198410 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8f23058f_a564_4869_a7f6_c9b81df47efd.slice/crio-8f7a9a76263dbe123b86b7a712346b7feecbfd5f46054c67e92f551254fbbc4a WatchSource:0}: Error finding container 8f7a9a76263dbe123b86b7a712346b7feecbfd5f46054c67e92f551254fbbc4a: Status 404 returned error can't find the container with id 8f7a9a76263dbe123b86b7a712346b7feecbfd5f46054c67e92f551254fbbc4a Mar 13 01:32:40.312797 master-0 kubenswrapper[19803]: I0313 01:32:40.312587 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5858f5fbd6-9dwpn" event={"ID":"8f23058f-a564-4869-a7f6-c9b81df47efd","Type":"ContainerStarted","Data":"8f7a9a76263dbe123b86b7a712346b7feecbfd5f46054c67e92f551254fbbc4a"} Mar 13 01:32:40.354504 master-0 kubenswrapper[19803]: I0313 01:32:40.354447 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-serving-cert\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.355088 master-0 kubenswrapper[19803]: I0313 01:32:40.354530 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-oauth-config\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.355088 master-0 kubenswrapper[19803]: I0313 01:32:40.354556 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-config\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.355088 master-0 kubenswrapper[19803]: I0313 01:32:40.354617 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-service-ca\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.355088 master-0 kubenswrapper[19803]: I0313 01:32:40.354633 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-trusted-ca-bundle\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.355088 master-0 kubenswrapper[19803]: I0313 01:32:40.354695 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l86vt\" (UniqueName: \"kubernetes.io/projected/43107d0a-efa1-46b4-b0ae-8029f21b46ad-kube-api-access-l86vt\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.355088 master-0 kubenswrapper[19803]: I0313 01:32:40.354730 19803 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-oauth-serving-cert\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.456357 master-0 kubenswrapper[19803]: I0313 01:32:40.456279 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l86vt\" (UniqueName: \"kubernetes.io/projected/43107d0a-efa1-46b4-b0ae-8029f21b46ad-kube-api-access-l86vt\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.456649 master-0 kubenswrapper[19803]: I0313 01:32:40.456397 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-oauth-serving-cert\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.456649 master-0 kubenswrapper[19803]: I0313 01:32:40.456435 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-serving-cert\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.456649 master-0 kubenswrapper[19803]: I0313 01:32:40.456488 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-oauth-config\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" 
Mar 13 01:32:40.456649 master-0 kubenswrapper[19803]: I0313 01:32:40.456518 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-config\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.456649 master-0 kubenswrapper[19803]: I0313 01:32:40.456611 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-service-ca\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.456649 master-0 kubenswrapper[19803]: I0313 01:32:40.456628 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-trusted-ca-bundle\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.457950 master-0 kubenswrapper[19803]: I0313 01:32:40.457910 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-service-ca\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.459424 master-0 kubenswrapper[19803]: I0313 01:32:40.459393 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-config\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 
01:32:40.464663 master-0 kubenswrapper[19803]: I0313 01:32:40.464626 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-oauth-serving-cert\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.464663 master-0 kubenswrapper[19803]: I0313 01:32:40.464654 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-serving-cert\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.464971 master-0 kubenswrapper[19803]: I0313 01:32:40.464939 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-oauth-config\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.465432 master-0 kubenswrapper[19803]: I0313 01:32:40.465390 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-trusted-ca-bundle\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.487171 master-0 kubenswrapper[19803]: I0313 01:32:40.487114 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l86vt\" (UniqueName: \"kubernetes.io/projected/43107d0a-efa1-46b4-b0ae-8029f21b46ad-kube-api-access-l86vt\") pod \"console-59c5b4f6c8-xvqg6\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " 
pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.500288 master-0 kubenswrapper[19803]: I0313 01:32:40.500156 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:40.955616 master-0 kubenswrapper[19803]: I0313 01:32:40.954027 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-59c5b4f6c8-xvqg6"] Mar 13 01:32:41.336429 master-0 kubenswrapper[19803]: I0313 01:32:41.336353 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5858f5fbd6-9dwpn" event={"ID":"8f23058f-a564-4869-a7f6-c9b81df47efd","Type":"ContainerStarted","Data":"9b8d89db147932dc214c7ad10fc59fc9ceb1b757a60976a48e0ae5d4de8daaa8"} Mar 13 01:32:41.339099 master-0 kubenswrapper[19803]: I0313 01:32:41.339021 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59c5b4f6c8-xvqg6" event={"ID":"43107d0a-efa1-46b4-b0ae-8029f21b46ad","Type":"ContainerStarted","Data":"625c63aaa079ea37a3add2c597ae342fdf4ce128aac041cabf25a70180fc9340"} Mar 13 01:32:41.339265 master-0 kubenswrapper[19803]: I0313 01:32:41.339125 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59c5b4f6c8-xvqg6" event={"ID":"43107d0a-efa1-46b4-b0ae-8029f21b46ad","Type":"ContainerStarted","Data":"0c6752354f3eae6ff880a852d4ed1cbe8033adb351dd1f3fff35520473017989"} Mar 13 01:32:41.379559 master-0 kubenswrapper[19803]: I0313 01:32:41.376592 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-5858f5fbd6-9dwpn" podStartSLOduration=2.376486271 podStartE2EDuration="2.376486271s" podCreationTimestamp="2026-03-13 01:32:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:32:41.373311039 +0000 UTC m=+909.338458708" watchObservedRunningTime="2026-03-13 01:32:41.376486271 +0000 UTC 
m=+909.341633960" Mar 13 01:32:41.404107 master-0 kubenswrapper[19803]: I0313 01:32:41.403970 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-59c5b4f6c8-xvqg6" podStartSLOduration=1.403945381 podStartE2EDuration="1.403945381s" podCreationTimestamp="2026-03-13 01:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:32:41.401356355 +0000 UTC m=+909.366504044" watchObservedRunningTime="2026-03-13 01:32:41.403945381 +0000 UTC m=+909.369093070" Mar 13 01:32:42.900244 master-0 kubenswrapper[19803]: I0313 01:32:42.900165 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:32:49.702715 master-0 kubenswrapper[19803]: I0313 01:32:49.702617 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:32:50.502074 master-0 kubenswrapper[19803]: I0313 01:32:50.501973 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:50.502411 master-0 kubenswrapper[19803]: I0313 01:32:50.502290 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:50.509695 master-0 kubenswrapper[19803]: I0313 01:32:50.509632 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:51.453428 master-0 kubenswrapper[19803]: I0313 01:32:51.451764 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:32:51.566822 master-0 kubenswrapper[19803]: I0313 01:32:51.566735 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5cd7664db7-4ljbn"] Mar 13 01:33:04.384741 
master-0 kubenswrapper[19803]: I0313 01:33:04.383702 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-846db4bc94-fklp6" podUID="462eaab3-2c83-41c1-ad56-0121ee483d42" containerName="console" containerID="cri-o://7d547e0bf07e2db893b8168863fd6d657ea918fb6af51f15e6a204275c47ec35" gracePeriod=15 Mar 13 01:33:04.589074 master-0 kubenswrapper[19803]: I0313 01:33:04.589022 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-846db4bc94-fklp6_462eaab3-2c83-41c1-ad56-0121ee483d42/console/0.log" Mar 13 01:33:04.589567 master-0 kubenswrapper[19803]: I0313 01:33:04.589499 19803 generic.go:334] "Generic (PLEG): container finished" podID="462eaab3-2c83-41c1-ad56-0121ee483d42" containerID="7d547e0bf07e2db893b8168863fd6d657ea918fb6af51f15e6a204275c47ec35" exitCode=2 Mar 13 01:33:04.589730 master-0 kubenswrapper[19803]: I0313 01:33:04.589587 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846db4bc94-fklp6" event={"ID":"462eaab3-2c83-41c1-ad56-0121ee483d42","Type":"ContainerDied","Data":"7d547e0bf07e2db893b8168863fd6d657ea918fb6af51f15e6a204275c47ec35"} Mar 13 01:33:04.889055 master-0 kubenswrapper[19803]: I0313 01:33:04.888984 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-846db4bc94-fklp6_462eaab3-2c83-41c1-ad56-0121ee483d42/console/0.log" Mar 13 01:33:04.889275 master-0 kubenswrapper[19803]: I0313 01:33:04.889140 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.950731 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-service-ca\") pod \"462eaab3-2c83-41c1-ad56-0121ee483d42\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.951050 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-oauth-serving-cert\") pod \"462eaab3-2c83-41c1-ad56-0121ee483d42\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.951200 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-serving-cert\") pod \"462eaab3-2c83-41c1-ad56-0121ee483d42\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.951261 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-trusted-ca-bundle\") pod \"462eaab3-2c83-41c1-ad56-0121ee483d42\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.951348 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-oauth-config\") pod \"462eaab3-2c83-41c1-ad56-0121ee483d42\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 
01:33:04.951440 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-service-ca" (OuterVolumeSpecName: "service-ca") pod "462eaab3-2c83-41c1-ad56-0121ee483d42" (UID: "462eaab3-2c83-41c1-ad56-0121ee483d42"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.951458 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-console-config\") pod \"462eaab3-2c83-41c1-ad56-0121ee483d42\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.951603 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dqkpr\" (UniqueName: \"kubernetes.io/projected/462eaab3-2c83-41c1-ad56-0121ee483d42-kube-api-access-dqkpr\") pod \"462eaab3-2c83-41c1-ad56-0121ee483d42\" (UID: \"462eaab3-2c83-41c1-ad56-0121ee483d42\") " Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.952656 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-console-config" (OuterVolumeSpecName: "console-config") pod "462eaab3-2c83-41c1-ad56-0121ee483d42" (UID: "462eaab3-2c83-41c1-ad56-0121ee483d42"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.953358 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "462eaab3-2c83-41c1-ad56-0121ee483d42" (UID: "462eaab3-2c83-41c1-ad56-0121ee483d42"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.954255 19803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.954280 19803 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:04.955008 master-0 kubenswrapper[19803]: I0313 01:33:04.954295 19803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:04.958275 master-0 kubenswrapper[19803]: I0313 01:33:04.956211 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "462eaab3-2c83-41c1-ad56-0121ee483d42" (UID: "462eaab3-2c83-41c1-ad56-0121ee483d42"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:04.958275 master-0 kubenswrapper[19803]: I0313 01:33:04.957874 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/462eaab3-2c83-41c1-ad56-0121ee483d42-kube-api-access-dqkpr" (OuterVolumeSpecName: "kube-api-access-dqkpr") pod "462eaab3-2c83-41c1-ad56-0121ee483d42" (UID: "462eaab3-2c83-41c1-ad56-0121ee483d42"). InnerVolumeSpecName "kube-api-access-dqkpr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:33:04.966549 master-0 kubenswrapper[19803]: I0313 01:33:04.961575 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "462eaab3-2c83-41c1-ad56-0121ee483d42" (UID: "462eaab3-2c83-41c1-ad56-0121ee483d42"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:33:04.967989 master-0 kubenswrapper[19803]: I0313 01:33:04.967622 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "462eaab3-2c83-41c1-ad56-0121ee483d42" (UID: "462eaab3-2c83-41c1-ad56-0121ee483d42"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:33:05.056440 master-0 kubenswrapper[19803]: I0313 01:33:05.056377 19803 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/462eaab3-2c83-41c1-ad56-0121ee483d42-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:05.056440 master-0 kubenswrapper[19803]: I0313 01:33:05.056425 19803 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:05.056440 master-0 kubenswrapper[19803]: I0313 01:33:05.056441 19803 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/462eaab3-2c83-41c1-ad56-0121ee483d42-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:05.056440 master-0 kubenswrapper[19803]: I0313 01:33:05.056455 19803 reconciler_common.go:293] "Volume 
detached for volume \"kube-api-access-dqkpr\" (UniqueName: \"kubernetes.io/projected/462eaab3-2c83-41c1-ad56-0121ee483d42-kube-api-access-dqkpr\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:05.604057 master-0 kubenswrapper[19803]: I0313 01:33:05.603990 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-846db4bc94-fklp6_462eaab3-2c83-41c1-ad56-0121ee483d42/console/0.log" Mar 13 01:33:05.604736 master-0 kubenswrapper[19803]: I0313 01:33:05.604073 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-846db4bc94-fklp6" event={"ID":"462eaab3-2c83-41c1-ad56-0121ee483d42","Type":"ContainerDied","Data":"0392951351a8b428ed3d9fd0a6044ce1d028815810bbd9f7b25f2bae675ffcb8"} Mar 13 01:33:05.604736 master-0 kubenswrapper[19803]: I0313 01:33:05.604122 19803 scope.go:117] "RemoveContainer" containerID="7d547e0bf07e2db893b8168863fd6d657ea918fb6af51f15e6a204275c47ec35" Mar 13 01:33:05.604736 master-0 kubenswrapper[19803]: I0313 01:33:05.604374 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-846db4bc94-fklp6" Mar 13 01:33:05.658808 master-0 kubenswrapper[19803]: I0313 01:33:05.658718 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-846db4bc94-fklp6"] Mar 13 01:33:05.668661 master-0 kubenswrapper[19803]: I0313 01:33:05.668449 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-846db4bc94-fklp6"] Mar 13 01:33:06.329450 master-0 kubenswrapper[19803]: I0313 01:33:06.329350 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="462eaab3-2c83-41c1-ad56-0121ee483d42" path="/var/lib/kubelet/pods/462eaab3-2c83-41c1-ad56-0121ee483d42/volumes" Mar 13 01:33:06.389877 master-0 kubenswrapper[19803]: I0313 01:33:06.389809 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5858f5fbd6-9dwpn" podUID="8f23058f-a564-4869-a7f6-c9b81df47efd" containerName="console" containerID="cri-o://9b8d89db147932dc214c7ad10fc59fc9ceb1b757a60976a48e0ae5d4de8daaa8" gracePeriod=15 Mar 13 01:33:06.634949 master-0 kubenswrapper[19803]: I0313 01:33:06.634798 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5858f5fbd6-9dwpn_8f23058f-a564-4869-a7f6-c9b81df47efd/console/0.log" Mar 13 01:33:06.634949 master-0 kubenswrapper[19803]: I0313 01:33:06.634882 19803 generic.go:334] "Generic (PLEG): container finished" podID="8f23058f-a564-4869-a7f6-c9b81df47efd" containerID="9b8d89db147932dc214c7ad10fc59fc9ceb1b757a60976a48e0ae5d4de8daaa8" exitCode=2 Mar 13 01:33:06.635781 master-0 kubenswrapper[19803]: I0313 01:33:06.634964 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5858f5fbd6-9dwpn" event={"ID":"8f23058f-a564-4869-a7f6-c9b81df47efd","Type":"ContainerDied","Data":"9b8d89db147932dc214c7ad10fc59fc9ceb1b757a60976a48e0ae5d4de8daaa8"} Mar 13 01:33:06.930896 master-0 kubenswrapper[19803]: I0313 01:33:06.930867 19803 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5858f5fbd6-9dwpn_8f23058f-a564-4869-a7f6-c9b81df47efd/console/0.log" Mar 13 01:33:06.931116 master-0 kubenswrapper[19803]: I0313 01:33:06.931091 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:33:06.995685 master-0 kubenswrapper[19803]: I0313 01:33:06.995624 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cxfc\" (UniqueName: \"kubernetes.io/projected/8f23058f-a564-4869-a7f6-c9b81df47efd-kube-api-access-8cxfc\") pod \"8f23058f-a564-4869-a7f6-c9b81df47efd\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " Mar 13 01:33:06.995899 master-0 kubenswrapper[19803]: I0313 01:33:06.995717 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-trusted-ca-bundle\") pod \"8f23058f-a564-4869-a7f6-c9b81df47efd\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " Mar 13 01:33:06.995899 master-0 kubenswrapper[19803]: I0313 01:33:06.995766 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-service-ca\") pod \"8f23058f-a564-4869-a7f6-c9b81df47efd\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " Mar 13 01:33:06.995899 master-0 kubenswrapper[19803]: I0313 01:33:06.995795 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-oauth-serving-cert\") pod \"8f23058f-a564-4869-a7f6-c9b81df47efd\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " Mar 13 01:33:06.997097 master-0 kubenswrapper[19803]: I0313 01:33:06.997047 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-serving-cert\") pod \"8f23058f-a564-4869-a7f6-c9b81df47efd\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " Mar 13 01:33:06.997223 master-0 kubenswrapper[19803]: I0313 01:33:06.997157 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-oauth-config\") pod \"8f23058f-a564-4869-a7f6-c9b81df47efd\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " Mar 13 01:33:06.997223 master-0 kubenswrapper[19803]: I0313 01:33:06.997178 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-console-config\") pod \"8f23058f-a564-4869-a7f6-c9b81df47efd\" (UID: \"8f23058f-a564-4869-a7f6-c9b81df47efd\") " Mar 13 01:33:06.997741 master-0 kubenswrapper[19803]: I0313 01:33:06.997713 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "8f23058f-a564-4869-a7f6-c9b81df47efd" (UID: "8f23058f-a564-4869-a7f6-c9b81df47efd"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:06.997893 master-0 kubenswrapper[19803]: I0313 01:33:06.997868 19803 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:06.998246 master-0 kubenswrapper[19803]: I0313 01:33:06.998221 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-console-config" (OuterVolumeSpecName: "console-config") pod "8f23058f-a564-4869-a7f6-c9b81df47efd" (UID: "8f23058f-a564-4869-a7f6-c9b81df47efd"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:06.998360 master-0 kubenswrapper[19803]: I0313 01:33:06.998256 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-service-ca" (OuterVolumeSpecName: "service-ca") pod "8f23058f-a564-4869-a7f6-c9b81df47efd" (UID: "8f23058f-a564-4869-a7f6-c9b81df47efd"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:06.998360 master-0 kubenswrapper[19803]: I0313 01:33:06.998319 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "8f23058f-a564-4869-a7f6-c9b81df47efd" (UID: "8f23058f-a564-4869-a7f6-c9b81df47efd"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:07.000308 master-0 kubenswrapper[19803]: I0313 01:33:07.000266 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "8f23058f-a564-4869-a7f6-c9b81df47efd" (UID: "8f23058f-a564-4869-a7f6-c9b81df47efd"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:33:07.002149 master-0 kubenswrapper[19803]: I0313 01:33:07.002103 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "8f23058f-a564-4869-a7f6-c9b81df47efd" (UID: "8f23058f-a564-4869-a7f6-c9b81df47efd"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:33:07.002795 master-0 kubenswrapper[19803]: I0313 01:33:07.002751 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f23058f-a564-4869-a7f6-c9b81df47efd-kube-api-access-8cxfc" (OuterVolumeSpecName: "kube-api-access-8cxfc") pod "8f23058f-a564-4869-a7f6-c9b81df47efd" (UID: "8f23058f-a564-4869-a7f6-c9b81df47efd"). InnerVolumeSpecName "kube-api-access-8cxfc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:33:07.099615 master-0 kubenswrapper[19803]: I0313 01:33:07.099542 19803 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:07.099615 master-0 kubenswrapper[19803]: I0313 01:33:07.099611 19803 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/8f23058f-a564-4869-a7f6-c9b81df47efd-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:07.099965 master-0 kubenswrapper[19803]: I0313 01:33:07.099635 19803 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:07.099965 master-0 kubenswrapper[19803]: I0313 01:33:07.099658 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cxfc\" (UniqueName: \"kubernetes.io/projected/8f23058f-a564-4869-a7f6-c9b81df47efd-kube-api-access-8cxfc\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:07.099965 master-0 kubenswrapper[19803]: I0313 01:33:07.099677 19803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:07.099965 master-0 kubenswrapper[19803]: I0313 01:33:07.099699 19803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/8f23058f-a564-4869-a7f6-c9b81df47efd-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:07.649896 master-0 kubenswrapper[19803]: I0313 01:33:07.649826 19803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-5858f5fbd6-9dwpn_8f23058f-a564-4869-a7f6-c9b81df47efd/console/0.log" Mar 13 01:33:07.650505 master-0 kubenswrapper[19803]: I0313 01:33:07.649933 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5858f5fbd6-9dwpn" event={"ID":"8f23058f-a564-4869-a7f6-c9b81df47efd","Type":"ContainerDied","Data":"8f7a9a76263dbe123b86b7a712346b7feecbfd5f46054c67e92f551254fbbc4a"} Mar 13 01:33:07.650505 master-0 kubenswrapper[19803]: I0313 01:33:07.650004 19803 scope.go:117] "RemoveContainer" containerID="9b8d89db147932dc214c7ad10fc59fc9ceb1b757a60976a48e0ae5d4de8daaa8" Mar 13 01:33:07.650505 master-0 kubenswrapper[19803]: I0313 01:33:07.650038 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5858f5fbd6-9dwpn" Mar 13 01:33:07.697762 master-0 kubenswrapper[19803]: I0313 01:33:07.697685 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5858f5fbd6-9dwpn"] Mar 13 01:33:07.704254 master-0 kubenswrapper[19803]: I0313 01:33:07.704191 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5858f5fbd6-9dwpn"] Mar 13 01:33:08.325701 master-0 kubenswrapper[19803]: I0313 01:33:08.325523 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f23058f-a564-4869-a7f6-c9b81df47efd" path="/var/lib/kubelet/pods/8f23058f-a564-4869-a7f6-c9b81df47efd/volumes" Mar 13 01:33:16.630119 master-0 kubenswrapper[19803]: I0313 01:33:16.629985 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-5cd7664db7-4ljbn" podUID="e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" containerName="console" containerID="cri-o://361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e" gracePeriod=15 Mar 13 01:33:17.196288 master-0 kubenswrapper[19803]: I0313 01:33:17.196239 19803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-console_console-5cd7664db7-4ljbn_e67a6f8f-fda6-408b-adaa-6d34ba7fb34b/console/0.log" Mar 13 01:33:17.196557 master-0 kubenswrapper[19803]: I0313 01:33:17.196320 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:33:17.275896 master-0 kubenswrapper[19803]: I0313 01:33:17.275831 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-service-ca\") pod \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " Mar 13 01:33:17.276163 master-0 kubenswrapper[19803]: I0313 01:33:17.276053 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-oauth-config\") pod \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " Mar 13 01:33:17.276856 master-0 kubenswrapper[19803]: I0313 01:33:17.276789 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-service-ca" (OuterVolumeSpecName: "service-ca") pod "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" (UID: "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:17.276856 master-0 kubenswrapper[19803]: I0313 01:33:17.276849 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-serving-cert\") pod \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " Mar 13 01:33:17.276970 master-0 kubenswrapper[19803]: I0313 01:33:17.276892 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-config\") pod \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " Mar 13 01:33:17.276970 master-0 kubenswrapper[19803]: I0313 01:33:17.276926 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8924\" (UniqueName: \"kubernetes.io/projected/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-kube-api-access-k8924\") pod \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " Mar 13 01:33:17.276970 master-0 kubenswrapper[19803]: I0313 01:33:17.276957 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-trusted-ca-bundle\") pod \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " Mar 13 01:33:17.277132 master-0 kubenswrapper[19803]: I0313 01:33:17.276982 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-oauth-serving-cert\") pod \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\" (UID: \"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b\") " Mar 13 01:33:17.277545 master-0 kubenswrapper[19803]: I0313 
01:33:17.277472 19803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:17.278018 master-0 kubenswrapper[19803]: I0313 01:33:17.277970 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" (UID: "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:17.278375 master-0 kubenswrapper[19803]: I0313 01:33:17.278321 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" (UID: "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:17.278671 master-0 kubenswrapper[19803]: I0313 01:33:17.278607 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-config" (OuterVolumeSpecName: "console-config") pod "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" (UID: "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:33:17.279269 master-0 kubenswrapper[19803]: I0313 01:33:17.279220 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" (UID: "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b"). 
InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:33:17.279853 master-0 kubenswrapper[19803]: I0313 01:33:17.279810 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" (UID: "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:33:17.280810 master-0 kubenswrapper[19803]: I0313 01:33:17.280744 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-kube-api-access-k8924" (OuterVolumeSpecName: "kube-api-access-k8924") pod "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" (UID: "e67a6f8f-fda6-408b-adaa-6d34ba7fb34b"). InnerVolumeSpecName "kube-api-access-k8924". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:33:17.379184 master-0 kubenswrapper[19803]: I0313 01:33:17.378929 19803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:17.379184 master-0 kubenswrapper[19803]: I0313 01:33:17.378970 19803 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:17.379184 master-0 kubenswrapper[19803]: I0313 01:33:17.378980 19803 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:17.379184 master-0 kubenswrapper[19803]: I0313 
01:33:17.378990 19803 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:17.379184 master-0 kubenswrapper[19803]: I0313 01:33:17.379001 19803 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:17.379184 master-0 kubenswrapper[19803]: I0313 01:33:17.379010 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8924\" (UniqueName: \"kubernetes.io/projected/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b-kube-api-access-k8924\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:17.759568 master-0 kubenswrapper[19803]: I0313 01:33:17.759533 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-5cd7664db7-4ljbn_e67a6f8f-fda6-408b-adaa-6d34ba7fb34b/console/0.log" Mar 13 01:33:17.760306 master-0 kubenswrapper[19803]: I0313 01:33:17.760272 19803 generic.go:334] "Generic (PLEG): container finished" podID="e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" containerID="361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e" exitCode=2 Mar 13 01:33:17.760458 master-0 kubenswrapper[19803]: I0313 01:33:17.760400 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-5cd7664db7-4ljbn" Mar 13 01:33:17.760611 master-0 kubenswrapper[19803]: I0313 01:33:17.760411 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cd7664db7-4ljbn" event={"ID":"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b","Type":"ContainerDied","Data":"361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e"} Mar 13 01:33:17.760693 master-0 kubenswrapper[19803]: I0313 01:33:17.760660 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-5cd7664db7-4ljbn" event={"ID":"e67a6f8f-fda6-408b-adaa-6d34ba7fb34b","Type":"ContainerDied","Data":"ff72e194cfed7b4d6896effc625ddce61ec30fea3948172abb83b79b2d88ad8e"} Mar 13 01:33:17.760753 master-0 kubenswrapper[19803]: I0313 01:33:17.760715 19803 scope.go:117] "RemoveContainer" containerID="361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e" Mar 13 01:33:17.791250 master-0 kubenswrapper[19803]: I0313 01:33:17.791158 19803 scope.go:117] "RemoveContainer" containerID="361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e" Mar 13 01:33:17.792830 master-0 kubenswrapper[19803]: E0313 01:33:17.792750 19803 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e\": container with ID starting with 361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e not found: ID does not exist" containerID="361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e" Mar 13 01:33:17.792993 master-0 kubenswrapper[19803]: I0313 01:33:17.792951 19803 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e"} err="failed to get container status \"361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e\": rpc error: code = NotFound desc = could not find 
container \"361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e\": container with ID starting with 361e415ae4ba50de4aa20b9fd7d1ce6fd4dab9a700ebdecb419062b10dd47f0e not found: ID does not exist" Mar 13 01:33:17.825401 master-0 kubenswrapper[19803]: I0313 01:33:17.825065 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-5cd7664db7-4ljbn"] Mar 13 01:33:17.836222 master-0 kubenswrapper[19803]: I0313 01:33:17.836134 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-5cd7664db7-4ljbn"] Mar 13 01:33:18.332561 master-0 kubenswrapper[19803]: I0313 01:33:18.332464 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" path="/var/lib/kubelet/pods/e67a6f8f-fda6-408b-adaa-6d34ba7fb34b/volumes" Mar 13 01:33:37.365916 master-0 kubenswrapper[19803]: I0313 01:33:37.365839 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q"] Mar 13 01:33:37.366805 master-0 kubenswrapper[19803]: E0313 01:33:37.366158 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" containerName="console" Mar 13 01:33:37.366805 master-0 kubenswrapper[19803]: I0313 01:33:37.366174 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" containerName="console" Mar 13 01:33:37.366805 master-0 kubenswrapper[19803]: E0313 01:33:37.366189 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f23058f-a564-4869-a7f6-c9b81df47efd" containerName="console" Mar 13 01:33:37.366805 master-0 kubenswrapper[19803]: I0313 01:33:37.366195 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f23058f-a564-4869-a7f6-c9b81df47efd" containerName="console" Mar 13 01:33:37.366805 master-0 kubenswrapper[19803]: E0313 01:33:37.366212 19803 cpu_manager.go:410] "RemoveStaleState: 
removing container" podUID="462eaab3-2c83-41c1-ad56-0121ee483d42" containerName="console" Mar 13 01:33:37.366805 master-0 kubenswrapper[19803]: I0313 01:33:37.366219 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="462eaab3-2c83-41c1-ad56-0121ee483d42" containerName="console" Mar 13 01:33:37.366805 master-0 kubenswrapper[19803]: I0313 01:33:37.366345 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="462eaab3-2c83-41c1-ad56-0121ee483d42" containerName="console" Mar 13 01:33:37.366805 master-0 kubenswrapper[19803]: I0313 01:33:37.366390 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f23058f-a564-4869-a7f6-c9b81df47efd" containerName="console" Mar 13 01:33:37.366805 master-0 kubenswrapper[19803]: I0313 01:33:37.366409 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="e67a6f8f-fda6-408b-adaa-6d34ba7fb34b" containerName="console" Mar 13 01:33:37.367338 master-0 kubenswrapper[19803]: I0313 01:33:37.367306 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:37.379214 master-0 kubenswrapper[19803]: I0313 01:33:37.379149 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q"] Mar 13 01:33:37.534553 master-0 kubenswrapper[19803]: I0313 01:33:37.534439 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:37.534553 master-0 kubenswrapper[19803]: I0313 01:33:37.534572 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:37.534926 master-0 kubenswrapper[19803]: I0313 01:33:37.534606 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vff9w\" (UniqueName: \"kubernetes.io/projected/dbee59b6-8c15-493f-9c2e-43f755507662-kube-api-access-vff9w\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:37.637072 master-0 kubenswrapper[19803]: I0313 01:33:37.636936 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:37.637265 master-0 kubenswrapper[19803]: I0313 01:33:37.637084 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-util\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:37.637265 master-0 kubenswrapper[19803]: I0313 01:33:37.637158 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vff9w\" (UniqueName: \"kubernetes.io/projected/dbee59b6-8c15-493f-9c2e-43f755507662-kube-api-access-vff9w\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:37.638414 master-0 kubenswrapper[19803]: I0313 01:33:37.638344 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-bundle\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:37.638565 master-0 kubenswrapper[19803]: I0313 01:33:37.638481 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-util\") pod 
\"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:37.663582 master-0 kubenswrapper[19803]: I0313 01:33:37.658765 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vff9w\" (UniqueName: \"kubernetes.io/projected/dbee59b6-8c15-493f-9c2e-43f755507662-kube-api-access-vff9w\") pod \"7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:37.730397 master-0 kubenswrapper[19803]: I0313 01:33:37.730290 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:38.042212 master-0 kubenswrapper[19803]: I0313 01:33:38.042149 19803 scope.go:117] "RemoveContainer" containerID="b695d42371df758d1a7c1ba4450073ea3c8b6d48c4320403e34e1092182489bd" Mar 13 01:33:38.060614 master-0 kubenswrapper[19803]: I0313 01:33:38.060571 19803 scope.go:117] "RemoveContainer" containerID="af3dea87089055ed8ff0a504beb660d463839e3f0a89b7384e4a83b81ca39cd2" Mar 13 01:33:38.087736 master-0 kubenswrapper[19803]: I0313 01:33:38.087675 19803 scope.go:117] "RemoveContainer" containerID="bf44bf0654c243447f5c2eddd5cb8108dd3746163d5c74fb0917f512b255102e" Mar 13 01:33:38.114472 master-0 kubenswrapper[19803]: I0313 01:33:38.114433 19803 scope.go:117] "RemoveContainer" containerID="3a0472a659129f987f1d91c84295078b1dadc74543f77b94adee51424a3773b8" Mar 13 01:33:38.142849 master-0 kubenswrapper[19803]: I0313 01:33:38.142801 19803 scope.go:117] "RemoveContainer" containerID="327852a029bfd0e834d21248720570e2f7ef7a434e195599fde2db98c26f8e41" Mar 13 01:33:38.163010 master-0 kubenswrapper[19803]: I0313 01:33:38.162959 19803 
scope.go:117] "RemoveContainer" containerID="aae2a34209a7f70578604cbdaf885049b779a8cdbb0f4b62cc513666e9bd8b15" Mar 13 01:33:38.189297 master-0 kubenswrapper[19803]: I0313 01:33:38.189124 19803 scope.go:117] "RemoveContainer" containerID="c1f4c96a645b26f09b2c0582119a2127c438791eb500b75817b09119417c519f" Mar 13 01:33:38.213718 master-0 kubenswrapper[19803]: I0313 01:33:38.213656 19803 scope.go:117] "RemoveContainer" containerID="87e7d839ee2a53e1c3f74a54b26e92cfb08db8934c88bf727c2b174e10eaeb14" Mar 13 01:33:38.237316 master-0 kubenswrapper[19803]: I0313 01:33:38.236949 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q"] Mar 13 01:33:38.972452 master-0 kubenswrapper[19803]: I0313 01:33:38.972259 19803 generic.go:334] "Generic (PLEG): container finished" podID="dbee59b6-8c15-493f-9c2e-43f755507662" containerID="41443a03e92a69e065f37c2315e0996a0b0fd8a2f3efaae03aae04a8bed7aff0" exitCode=0 Mar 13 01:33:38.972452 master-0 kubenswrapper[19803]: I0313 01:33:38.972338 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" event={"ID":"dbee59b6-8c15-493f-9c2e-43f755507662","Type":"ContainerDied","Data":"41443a03e92a69e065f37c2315e0996a0b0fd8a2f3efaae03aae04a8bed7aff0"} Mar 13 01:33:38.972452 master-0 kubenswrapper[19803]: I0313 01:33:38.972414 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" event={"ID":"dbee59b6-8c15-493f-9c2e-43f755507662","Type":"ContainerStarted","Data":"77a236ef7fce4231e27416c7bc95d20985e83fc3c17090caec6d85196490a454"} Mar 13 01:33:40.993960 master-0 kubenswrapper[19803]: I0313 01:33:40.993865 19803 generic.go:334] "Generic (PLEG): container finished" podID="dbee59b6-8c15-493f-9c2e-43f755507662" containerID="a54b8ed994042565a2862b95d4e88c9669a83eea9ebe96c4b9a5d02973824061" 
exitCode=0 Mar 13 01:33:40.994830 master-0 kubenswrapper[19803]: I0313 01:33:40.993962 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" event={"ID":"dbee59b6-8c15-493f-9c2e-43f755507662","Type":"ContainerDied","Data":"a54b8ed994042565a2862b95d4e88c9669a83eea9ebe96c4b9a5d02973824061"} Mar 13 01:33:42.009817 master-0 kubenswrapper[19803]: I0313 01:33:42.009590 19803 generic.go:334] "Generic (PLEG): container finished" podID="dbee59b6-8c15-493f-9c2e-43f755507662" containerID="b5a530849e30baa97a5e11435de16c597486a2d9b63ff5b67a5aa89f70501a67" exitCode=0 Mar 13 01:33:42.009817 master-0 kubenswrapper[19803]: I0313 01:33:42.009708 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" event={"ID":"dbee59b6-8c15-493f-9c2e-43f755507662","Type":"ContainerDied","Data":"b5a530849e30baa97a5e11435de16c597486a2d9b63ff5b67a5aa89f70501a67"} Mar 13 01:33:43.335447 master-0 kubenswrapper[19803]: I0313 01:33:43.335332 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:43.465559 master-0 kubenswrapper[19803]: I0313 01:33:43.463069 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-util\") pod \"dbee59b6-8c15-493f-9c2e-43f755507662\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " Mar 13 01:33:43.465559 master-0 kubenswrapper[19803]: I0313 01:33:43.463289 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vff9w\" (UniqueName: \"kubernetes.io/projected/dbee59b6-8c15-493f-9c2e-43f755507662-kube-api-access-vff9w\") pod \"dbee59b6-8c15-493f-9c2e-43f755507662\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " Mar 13 01:33:43.465559 master-0 kubenswrapper[19803]: I0313 01:33:43.464428 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-bundle\") pod \"dbee59b6-8c15-493f-9c2e-43f755507662\" (UID: \"dbee59b6-8c15-493f-9c2e-43f755507662\") " Mar 13 01:33:43.468553 master-0 kubenswrapper[19803]: I0313 01:33:43.467347 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-bundle" (OuterVolumeSpecName: "bundle") pod "dbee59b6-8c15-493f-9c2e-43f755507662" (UID: "dbee59b6-8c15-493f-9c2e-43f755507662"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:33:43.473076 master-0 kubenswrapper[19803]: I0313 01:33:43.473006 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbee59b6-8c15-493f-9c2e-43f755507662-kube-api-access-vff9w" (OuterVolumeSpecName: "kube-api-access-vff9w") pod "dbee59b6-8c15-493f-9c2e-43f755507662" (UID: "dbee59b6-8c15-493f-9c2e-43f755507662"). InnerVolumeSpecName "kube-api-access-vff9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:33:43.506565 master-0 kubenswrapper[19803]: I0313 01:33:43.506434 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-util" (OuterVolumeSpecName: "util") pod "dbee59b6-8c15-493f-9c2e-43f755507662" (UID: "dbee59b6-8c15-493f-9c2e-43f755507662"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:33:43.568571 master-0 kubenswrapper[19803]: I0313 01:33:43.568328 19803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:43.568571 master-0 kubenswrapper[19803]: I0313 01:33:43.568417 19803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/dbee59b6-8c15-493f-9c2e-43f755507662-util\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:43.568571 master-0 kubenswrapper[19803]: I0313 01:33:43.568441 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vff9w\" (UniqueName: \"kubernetes.io/projected/dbee59b6-8c15-493f-9c2e-43f755507662-kube-api-access-vff9w\") on node \"master-0\" DevicePath \"\"" Mar 13 01:33:44.030767 master-0 kubenswrapper[19803]: I0313 01:33:44.030716 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" event={"ID":"dbee59b6-8c15-493f-9c2e-43f755507662","Type":"ContainerDied","Data":"77a236ef7fce4231e27416c7bc95d20985e83fc3c17090caec6d85196490a454"} Mar 13 01:33:44.031100 master-0 kubenswrapper[19803]: I0313 01:33:44.031011 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a236ef7fce4231e27416c7bc95d20985e83fc3c17090caec6d85196490a454" Mar 13 01:33:44.031203 master-0 kubenswrapper[19803]: I0313 01:33:44.030791 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/7f6062bfcf66f08711c4d599873349559e66916847a22b4b74a32f97d476z5q" Mar 13 01:33:50.014050 master-0 kubenswrapper[19803]: I0313 01:33:50.013989 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/lvms-operator-5855d99796-5p89t"] Mar 13 01:33:50.014776 master-0 kubenswrapper[19803]: E0313 01:33:50.014289 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbee59b6-8c15-493f-9c2e-43f755507662" containerName="util" Mar 13 01:33:50.014776 master-0 kubenswrapper[19803]: I0313 01:33:50.014305 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbee59b6-8c15-493f-9c2e-43f755507662" containerName="util" Mar 13 01:33:50.014776 master-0 kubenswrapper[19803]: E0313 01:33:50.014331 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbee59b6-8c15-493f-9c2e-43f755507662" containerName="pull" Mar 13 01:33:50.014776 master-0 kubenswrapper[19803]: I0313 01:33:50.014339 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbee59b6-8c15-493f-9c2e-43f755507662" containerName="pull" Mar 13 01:33:50.014776 master-0 kubenswrapper[19803]: E0313 01:33:50.014381 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbee59b6-8c15-493f-9c2e-43f755507662" containerName="extract" Mar 13 01:33:50.014776 master-0 kubenswrapper[19803]: I0313 
01:33:50.014390 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbee59b6-8c15-493f-9c2e-43f755507662" containerName="extract" Mar 13 01:33:50.014776 master-0 kubenswrapper[19803]: I0313 01:33:50.014586 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbee59b6-8c15-493f-9c2e-43f755507662" containerName="extract" Mar 13 01:33:50.015158 master-0 kubenswrapper[19803]: I0313 01:33:50.015130 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.018153 master-0 kubenswrapper[19803]: I0313 01:33:50.018113 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-service-cert" Mar 13 01:33:50.020727 master-0 kubenswrapper[19803]: I0313 01:33:50.020685 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"kube-root-ca.crt" Mar 13 01:33:50.021097 master-0 kubenswrapper[19803]: I0313 01:33:50.021078 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-webhook-server-cert" Mar 13 01:33:50.021156 master-0 kubenswrapper[19803]: I0313 01:33:50.021101 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-storage"/"openshift-service-ca.crt" Mar 13 01:33:50.021220 master-0 kubenswrapper[19803]: I0313 01:33:50.021203 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"lvms-operator-metrics-cert" Mar 13 01:33:50.035307 master-0 kubenswrapper[19803]: I0313 01:33:50.035249 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-5855d99796-5p89t"] Mar 13 01:33:50.093655 master-0 kubenswrapper[19803]: I0313 01:33:50.087011 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/94d3f6ed-df21-4254-80b2-4d07bb71930e-metrics-cert\") pod 
\"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.093655 master-0 kubenswrapper[19803]: I0313 01:33:50.087099 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/94d3f6ed-df21-4254-80b2-4d07bb71930e-socket-dir\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.093655 master-0 kubenswrapper[19803]: I0313 01:33:50.087122 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94d3f6ed-df21-4254-80b2-4d07bb71930e-apiservice-cert\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.093655 master-0 kubenswrapper[19803]: I0313 01:33:50.087158 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d48sl\" (UniqueName: \"kubernetes.io/projected/94d3f6ed-df21-4254-80b2-4d07bb71930e-kube-api-access-d48sl\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.093655 master-0 kubenswrapper[19803]: I0313 01:33:50.087177 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94d3f6ed-df21-4254-80b2-4d07bb71930e-webhook-cert\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.188207 master-0 kubenswrapper[19803]: I0313 01:33:50.188122 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/94d3f6ed-df21-4254-80b2-4d07bb71930e-metrics-cert\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.188462 master-0 kubenswrapper[19803]: I0313 01:33:50.188237 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/94d3f6ed-df21-4254-80b2-4d07bb71930e-socket-dir\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.188462 master-0 kubenswrapper[19803]: I0313 01:33:50.188260 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94d3f6ed-df21-4254-80b2-4d07bb71930e-apiservice-cert\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.188860 master-0 kubenswrapper[19803]: I0313 01:33:50.188833 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/94d3f6ed-df21-4254-80b2-4d07bb71930e-socket-dir\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.189294 master-0 kubenswrapper[19803]: I0313 01:33:50.188927 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d48sl\" (UniqueName: \"kubernetes.io/projected/94d3f6ed-df21-4254-80b2-4d07bb71930e-kube-api-access-d48sl\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.189367 master-0 
kubenswrapper[19803]: I0313 01:33:50.189321 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94d3f6ed-df21-4254-80b2-4d07bb71930e-webhook-cert\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.192327 master-0 kubenswrapper[19803]: I0313 01:33:50.192304 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/94d3f6ed-df21-4254-80b2-4d07bb71930e-webhook-cert\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.192726 master-0 kubenswrapper[19803]: I0313 01:33:50.192681 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/94d3f6ed-df21-4254-80b2-4d07bb71930e-metrics-cert\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.198545 master-0 kubenswrapper[19803]: I0313 01:33:50.198501 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/94d3f6ed-df21-4254-80b2-4d07bb71930e-apiservice-cert\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.206552 master-0 kubenswrapper[19803]: I0313 01:33:50.206483 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d48sl\" (UniqueName: \"kubernetes.io/projected/94d3f6ed-df21-4254-80b2-4d07bb71930e-kube-api-access-d48sl\") pod \"lvms-operator-5855d99796-5p89t\" (UID: \"94d3f6ed-df21-4254-80b2-4d07bb71930e\") " 
pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.330387 master-0 kubenswrapper[19803]: I0313 01:33:50.330259 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:50.786107 master-0 kubenswrapper[19803]: I0313 01:33:50.786019 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/lvms-operator-5855d99796-5p89t"] Mar 13 01:33:50.794291 master-0 kubenswrapper[19803]: W0313 01:33:50.794222 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod94d3f6ed_df21_4254_80b2_4d07bb71930e.slice/crio-92cba854142d9405782f3c037aa95d09f752b41bc6d7ed20582bd5a4cd29b7ff WatchSource:0}: Error finding container 92cba854142d9405782f3c037aa95d09f752b41bc6d7ed20582bd5a4cd29b7ff: Status 404 returned error can't find the container with id 92cba854142d9405782f3c037aa95d09f752b41bc6d7ed20582bd5a4cd29b7ff Mar 13 01:33:51.098627 master-0 kubenswrapper[19803]: I0313 01:33:51.098465 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-5855d99796-5p89t" event={"ID":"94d3f6ed-df21-4254-80b2-4d07bb71930e","Type":"ContainerStarted","Data":"92cba854142d9405782f3c037aa95d09f752b41bc6d7ed20582bd5a4cd29b7ff"} Mar 13 01:33:56.147576 master-0 kubenswrapper[19803]: I0313 01:33:56.147482 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/lvms-operator-5855d99796-5p89t" event={"ID":"94d3f6ed-df21-4254-80b2-4d07bb71930e","Type":"ContainerStarted","Data":"e882a655e2ec8629d0dfc8960e3105b856c928373afcf11731aa84adc9188cd3"} Mar 13 01:33:56.148187 master-0 kubenswrapper[19803]: I0313 01:33:56.147853 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:56.152686 master-0 kubenswrapper[19803]: I0313 01:33:56.152648 19803 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-storage/lvms-operator-5855d99796-5p89t" Mar 13 01:33:56.181086 master-0 kubenswrapper[19803]: I0313 01:33:56.180991 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/lvms-operator-5855d99796-5p89t" podStartSLOduration=2.697888024 podStartE2EDuration="7.180971668s" podCreationTimestamp="2026-03-13 01:33:49 +0000 UTC" firstStartedPulling="2026-03-13 01:33:50.796633113 +0000 UTC m=+978.761780792" lastFinishedPulling="2026-03-13 01:33:55.279716757 +0000 UTC m=+983.244864436" observedRunningTime="2026-03-13 01:33:56.175571897 +0000 UTC m=+984.140719606" watchObservedRunningTime="2026-03-13 01:33:56.180971668 +0000 UTC m=+984.146119367" Mar 13 01:33:59.542073 master-0 kubenswrapper[19803]: I0313 01:33:59.541989 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n"] Mar 13 01:33:59.559995 master-0 kubenswrapper[19803]: I0313 01:33:59.559909 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:33:59.580908 master-0 kubenswrapper[19803]: I0313 01:33:59.575437 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n"] Mar 13 01:33:59.656830 master-0 kubenswrapper[19803]: I0313 01:33:59.656689 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9264\" (UniqueName: \"kubernetes.io/projected/53208248-6725-47e0-8dbd-44f4a14cd8dd-kube-api-access-q9264\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:33:59.657155 master-0 kubenswrapper[19803]: I0313 01:33:59.656881 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:33:59.657155 master-0 kubenswrapper[19803]: I0313 01:33:59.656944 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:33:59.758906 master-0 kubenswrapper[19803]: I0313 01:33:59.758789 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-q9264\" (UniqueName: \"kubernetes.io/projected/53208248-6725-47e0-8dbd-44f4a14cd8dd-kube-api-access-q9264\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:33:59.759198 master-0 kubenswrapper[19803]: I0313 01:33:59.758936 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:33:59.759198 master-0 kubenswrapper[19803]: I0313 01:33:59.759002 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-bundle\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:33:59.759571 master-0 kubenswrapper[19803]: I0313 01:33:59.759503 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-util\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:33:59.759691 master-0 kubenswrapper[19803]: I0313 01:33:59.759659 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-bundle\") pod 
\"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:33:59.775863 master-0 kubenswrapper[19803]: I0313 01:33:59.775819 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9264\" (UniqueName: \"kubernetes.io/projected/53208248-6725-47e0-8dbd-44f4a14cd8dd-kube-api-access-q9264\") pod \"925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:33:59.892879 master-0 kubenswrapper[19803]: I0313 01:33:59.892700 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:34:00.406152 master-0 kubenswrapper[19803]: W0313 01:34:00.406042 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod53208248_6725_47e0_8dbd_44f4a14cd8dd.slice/crio-2f387ab7b5a533e087c2d4de5fdb15361cc3a1e6e1e3ac9b489d67081e62d971 WatchSource:0}: Error finding container 2f387ab7b5a533e087c2d4de5fdb15361cc3a1e6e1e3ac9b489d67081e62d971: Status 404 returned error can't find the container with id 2f387ab7b5a533e087c2d4de5fdb15361cc3a1e6e1e3ac9b489d67081e62d971 Mar 13 01:34:00.406741 master-0 kubenswrapper[19803]: I0313 01:34:00.406339 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n"] Mar 13 01:34:00.940695 master-0 kubenswrapper[19803]: I0313 01:34:00.939842 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz"] Mar 13 01:34:00.941927 master-0 kubenswrapper[19803]: 
I0313 01:34:00.941896 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:00.954440 master-0 kubenswrapper[19803]: I0313 01:34:00.954379 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz"] Mar 13 01:34:01.084560 master-0 kubenswrapper[19803]: I0313 01:34:01.084462 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:01.084874 master-0 kubenswrapper[19803]: I0313 01:34:01.084591 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:01.084874 master-0 kubenswrapper[19803]: I0313 01:34:01.084639 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m68q8\" (UniqueName: \"kubernetes.io/projected/3fd58bf2-4149-40df-8abb-d782378b1e5a-kube-api-access-m68q8\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:01.186768 master-0 kubenswrapper[19803]: I0313 01:34:01.186689 19803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:01.187091 master-0 kubenswrapper[19803]: I0313 01:34:01.186955 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:01.187091 master-0 kubenswrapper[19803]: I0313 01:34:01.186998 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m68q8\" (UniqueName: \"kubernetes.io/projected/3fd58bf2-4149-40df-8abb-d782378b1e5a-kube-api-access-m68q8\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:01.187490 master-0 kubenswrapper[19803]: I0313 01:34:01.187445 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-bundle\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:01.187631 master-0 kubenswrapper[19803]: I0313 01:34:01.187603 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-util\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:01.207022 master-0 kubenswrapper[19803]: I0313 01:34:01.206878 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m68q8\" (UniqueName: \"kubernetes.io/projected/3fd58bf2-4149-40df-8abb-d782378b1e5a-kube-api-access-m68q8\") pod \"2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:01.207738 master-0 kubenswrapper[19803]: I0313 01:34:01.207653 19803 generic.go:334] "Generic (PLEG): container finished" podID="53208248-6725-47e0-8dbd-44f4a14cd8dd" containerID="a4e8a985e8acdbbbb8c2004ab5fdcde90be641543ec8fd9a6c19e93a629eec0c" exitCode=0 Mar 13 01:34:01.207806 master-0 kubenswrapper[19803]: I0313 01:34:01.207764 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" event={"ID":"53208248-6725-47e0-8dbd-44f4a14cd8dd","Type":"ContainerDied","Data":"a4e8a985e8acdbbbb8c2004ab5fdcde90be641543ec8fd9a6c19e93a629eec0c"} Mar 13 01:34:01.207861 master-0 kubenswrapper[19803]: I0313 01:34:01.207830 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" event={"ID":"53208248-6725-47e0-8dbd-44f4a14cd8dd","Type":"ContainerStarted","Data":"2f387ab7b5a533e087c2d4de5fdb15361cc3a1e6e1e3ac9b489d67081e62d971"} Mar 13 01:34:01.269302 master-0 kubenswrapper[19803]: I0313 01:34:01.269230 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:01.733638 master-0 kubenswrapper[19803]: I0313 01:34:01.733553 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d"] Mar 13 01:34:01.736636 master-0 kubenswrapper[19803]: I0313 01:34:01.736590 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:01.743880 master-0 kubenswrapper[19803]: I0313 01:34:01.743823 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz"] Mar 13 01:34:01.750957 master-0 kubenswrapper[19803]: I0313 01:34:01.749613 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d"] Mar 13 01:34:01.807776 master-0 kubenswrapper[19803]: I0313 01:34:01.807715 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:01.807879 master-0 kubenswrapper[19803]: I0313 01:34:01.807853 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q5w8v\" (UniqueName: \"kubernetes.io/projected/63413ca9-9863-4285-b222-544c77cc64a2-kube-api-access-q5w8v\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:01.807982 master-0 kubenswrapper[19803]: I0313 01:34:01.807924 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:01.910067 master-0 kubenswrapper[19803]: I0313 01:34:01.910020 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:01.910802 master-0 kubenswrapper[19803]: I0313 01:34:01.910777 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:01.911003 master-0 kubenswrapper[19803]: I0313 01:34:01.910980 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q5w8v\" (UniqueName: \"kubernetes.io/projected/63413ca9-9863-4285-b222-544c77cc64a2-kube-api-access-q5w8v\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " 
pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:01.911233 master-0 kubenswrapper[19803]: I0313 01:34:01.911170 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-bundle\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:01.911460 master-0 kubenswrapper[19803]: I0313 01:34:01.911403 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-util\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:01.931191 master-0 kubenswrapper[19803]: I0313 01:34:01.931161 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q5w8v\" (UniqueName: \"kubernetes.io/projected/63413ca9-9863-4285-b222-544c77cc64a2-kube-api-access-q5w8v\") pod \"1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:02.139381 master-0 kubenswrapper[19803]: I0313 01:34:02.139311 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:02.219922 master-0 kubenswrapper[19803]: I0313 01:34:02.218138 19803 generic.go:334] "Generic (PLEG): container finished" podID="3fd58bf2-4149-40df-8abb-d782378b1e5a" containerID="456fb8860e70cc587ef1b699a77131e5b0c99e9846490ad21d568fe69391c426" exitCode=0 Mar 13 01:34:02.219922 master-0 kubenswrapper[19803]: I0313 01:34:02.218193 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" event={"ID":"3fd58bf2-4149-40df-8abb-d782378b1e5a","Type":"ContainerDied","Data":"456fb8860e70cc587ef1b699a77131e5b0c99e9846490ad21d568fe69391c426"} Mar 13 01:34:02.219922 master-0 kubenswrapper[19803]: I0313 01:34:02.218225 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" event={"ID":"3fd58bf2-4149-40df-8abb-d782378b1e5a","Type":"ContainerStarted","Data":"34667dcb88a539116dc8110b7d61830403addcccb6891d0032c5b2ee4a1d9130"} Mar 13 01:34:02.577899 master-0 kubenswrapper[19803]: I0313 01:34:02.577820 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d"] Mar 13 01:34:02.586898 master-0 kubenswrapper[19803]: W0313 01:34:02.586829 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63413ca9_9863_4285_b222_544c77cc64a2.slice/crio-a0aa2ab34cf1eb6868bcbb9ca2ad554d765a25ce23fcece53f94c588c750987b WatchSource:0}: Error finding container a0aa2ab34cf1eb6868bcbb9ca2ad554d765a25ce23fcece53f94c588c750987b: Status 404 returned error can't find the container with id a0aa2ab34cf1eb6868bcbb9ca2ad554d765a25ce23fcece53f94c588c750987b Mar 13 01:34:03.227027 master-0 kubenswrapper[19803]: I0313 01:34:03.226887 19803 
generic.go:334] "Generic (PLEG): container finished" podID="63413ca9-9863-4285-b222-544c77cc64a2" containerID="280d1fb4396ced8fcae20222ef5e6321e642cde7ba07e7c41836235aae06e2c6" exitCode=0 Mar 13 01:34:03.227027 master-0 kubenswrapper[19803]: I0313 01:34:03.226971 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" event={"ID":"63413ca9-9863-4285-b222-544c77cc64a2","Type":"ContainerDied","Data":"280d1fb4396ced8fcae20222ef5e6321e642cde7ba07e7c41836235aae06e2c6"} Mar 13 01:34:03.227027 master-0 kubenswrapper[19803]: I0313 01:34:03.227030 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" event={"ID":"63413ca9-9863-4285-b222-544c77cc64a2","Type":"ContainerStarted","Data":"a0aa2ab34cf1eb6868bcbb9ca2ad554d765a25ce23fcece53f94c588c750987b"} Mar 13 01:34:05.241821 master-0 kubenswrapper[19803]: I0313 01:34:05.241755 19803 generic.go:334] "Generic (PLEG): container finished" podID="63413ca9-9863-4285-b222-544c77cc64a2" containerID="39a5341d4cfffd53bdbde04f841f2b56d6b9f093c01fc73971ec95d7a7af6431" exitCode=0 Mar 13 01:34:05.242481 master-0 kubenswrapper[19803]: I0313 01:34:05.241848 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" event={"ID":"63413ca9-9863-4285-b222-544c77cc64a2","Type":"ContainerDied","Data":"39a5341d4cfffd53bdbde04f841f2b56d6b9f093c01fc73971ec95d7a7af6431"} Mar 13 01:34:05.248052 master-0 kubenswrapper[19803]: I0313 01:34:05.247985 19803 generic.go:334] "Generic (PLEG): container finished" podID="53208248-6725-47e0-8dbd-44f4a14cd8dd" containerID="378f1a7e273a0269856c727def6b49c546222b5716d511a151f7f94a0f21adef" exitCode=0 Mar 13 01:34:05.248131 master-0 kubenswrapper[19803]: I0313 01:34:05.248074 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" event={"ID":"53208248-6725-47e0-8dbd-44f4a14cd8dd","Type":"ContainerDied","Data":"378f1a7e273a0269856c727def6b49c546222b5716d511a151f7f94a0f21adef"} Mar 13 01:34:05.253298 master-0 kubenswrapper[19803]: I0313 01:34:05.253252 19803 generic.go:334] "Generic (PLEG): container finished" podID="3fd58bf2-4149-40df-8abb-d782378b1e5a" containerID="7f7bb61cf3cf0d63744392ec7f0d00eea0f09f15165c6600ab1235459a3e925b" exitCode=0 Mar 13 01:34:05.253298 master-0 kubenswrapper[19803]: I0313 01:34:05.253293 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" event={"ID":"3fd58bf2-4149-40df-8abb-d782378b1e5a","Type":"ContainerDied","Data":"7f7bb61cf3cf0d63744392ec7f0d00eea0f09f15165c6600ab1235459a3e925b"} Mar 13 01:34:06.264087 master-0 kubenswrapper[19803]: I0313 01:34:06.264026 19803 generic.go:334] "Generic (PLEG): container finished" podID="53208248-6725-47e0-8dbd-44f4a14cd8dd" containerID="65284e1d76dedba97188ea40e3b8279a4e841ff2385228f1bc280bd657e98b20" exitCode=0 Mar 13 01:34:06.264766 master-0 kubenswrapper[19803]: I0313 01:34:06.264685 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" event={"ID":"53208248-6725-47e0-8dbd-44f4a14cd8dd","Type":"ContainerDied","Data":"65284e1d76dedba97188ea40e3b8279a4e841ff2385228f1bc280bd657e98b20"} Mar 13 01:34:06.266745 master-0 kubenswrapper[19803]: I0313 01:34:06.266717 19803 generic.go:334] "Generic (PLEG): container finished" podID="3fd58bf2-4149-40df-8abb-d782378b1e5a" containerID="11cf1031cbbaf420ce52b394099a8deb6facd5f0f8bcdb9854624370a31bf31e" exitCode=0 Mar 13 01:34:06.266895 master-0 kubenswrapper[19803]: I0313 01:34:06.266772 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" event={"ID":"3fd58bf2-4149-40df-8abb-d782378b1e5a","Type":"ContainerDied","Data":"11cf1031cbbaf420ce52b394099a8deb6facd5f0f8bcdb9854624370a31bf31e"} Mar 13 01:34:06.269393 master-0 kubenswrapper[19803]: I0313 01:34:06.269355 19803 generic.go:334] "Generic (PLEG): container finished" podID="63413ca9-9863-4285-b222-544c77cc64a2" containerID="cc3f58d75de8390d519bf6e9bda3b185d95755574736680be13f3ec08c3e9e25" exitCode=0 Mar 13 01:34:06.269574 master-0 kubenswrapper[19803]: I0313 01:34:06.269400 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" event={"ID":"63413ca9-9863-4285-b222-544c77cc64a2","Type":"ContainerDied","Data":"cc3f58d75de8390d519bf6e9bda3b185d95755574736680be13f3ec08c3e9e25"} Mar 13 01:34:07.803725 master-0 kubenswrapper[19803]: I0313 01:34:07.803685 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:07.888995 master-0 kubenswrapper[19803]: I0313 01:34:07.888925 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:07.896617 master-0 kubenswrapper[19803]: I0313 01:34:07.896570 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:34:07.961675 master-0 kubenswrapper[19803]: I0313 01:34:07.960767 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-bundle\") pod \"3fd58bf2-4149-40df-8abb-d782378b1e5a\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " Mar 13 01:34:07.961675 master-0 kubenswrapper[19803]: I0313 01:34:07.960823 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-util\") pod \"3fd58bf2-4149-40df-8abb-d782378b1e5a\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " Mar 13 01:34:07.961675 master-0 kubenswrapper[19803]: I0313 01:34:07.960880 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m68q8\" (UniqueName: \"kubernetes.io/projected/3fd58bf2-4149-40df-8abb-d782378b1e5a-kube-api-access-m68q8\") pod \"3fd58bf2-4149-40df-8abb-d782378b1e5a\" (UID: \"3fd58bf2-4149-40df-8abb-d782378b1e5a\") " Mar 13 01:34:07.963559 master-0 kubenswrapper[19803]: I0313 01:34:07.962928 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-bundle" (OuterVolumeSpecName: "bundle") pod "3fd58bf2-4149-40df-8abb-d782378b1e5a" (UID: "3fd58bf2-4149-40df-8abb-d782378b1e5a"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:34:07.964246 master-0 kubenswrapper[19803]: I0313 01:34:07.964111 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fd58bf2-4149-40df-8abb-d782378b1e5a-kube-api-access-m68q8" (OuterVolumeSpecName: "kube-api-access-m68q8") pod "3fd58bf2-4149-40df-8abb-d782378b1e5a" (UID: "3fd58bf2-4149-40df-8abb-d782378b1e5a"). InnerVolumeSpecName "kube-api-access-m68q8". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:34:07.982674 master-0 kubenswrapper[19803]: I0313 01:34:07.982620 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-util" (OuterVolumeSpecName: "util") pod "3fd58bf2-4149-40df-8abb-d782378b1e5a" (UID: "3fd58bf2-4149-40df-8abb-d782378b1e5a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:34:08.062389 master-0 kubenswrapper[19803]: I0313 01:34:08.062241 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-bundle\") pod \"63413ca9-9863-4285-b222-544c77cc64a2\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " Mar 13 01:34:08.062389 master-0 kubenswrapper[19803]: I0313 01:34:08.062402 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-util\") pod \"63413ca9-9863-4285-b222-544c77cc64a2\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " Mar 13 01:34:08.062826 master-0 kubenswrapper[19803]: I0313 01:34:08.062488 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q9264\" (UniqueName: \"kubernetes.io/projected/53208248-6725-47e0-8dbd-44f4a14cd8dd-kube-api-access-q9264\") pod \"53208248-6725-47e0-8dbd-44f4a14cd8dd\" (UID: 
\"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " Mar 13 01:34:08.063437 master-0 kubenswrapper[19803]: I0313 01:34:08.063384 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-bundle\") pod \"53208248-6725-47e0-8dbd-44f4a14cd8dd\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " Mar 13 01:34:08.063861 master-0 kubenswrapper[19803]: I0313 01:34:08.063397 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-bundle" (OuterVolumeSpecName: "bundle") pod "63413ca9-9863-4285-b222-544c77cc64a2" (UID: "63413ca9-9863-4285-b222-544c77cc64a2"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:34:08.064103 master-0 kubenswrapper[19803]: I0313 01:34:08.064057 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q5w8v\" (UniqueName: \"kubernetes.io/projected/63413ca9-9863-4285-b222-544c77cc64a2-kube-api-access-q5w8v\") pod \"63413ca9-9863-4285-b222-544c77cc64a2\" (UID: \"63413ca9-9863-4285-b222-544c77cc64a2\") " Mar 13 01:34:08.064419 master-0 kubenswrapper[19803]: I0313 01:34:08.064382 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-util\") pod \"53208248-6725-47e0-8dbd-44f4a14cd8dd\" (UID: \"53208248-6725-47e0-8dbd-44f4a14cd8dd\") " Mar 13 01:34:08.064766 master-0 kubenswrapper[19803]: I0313 01:34:08.064717 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-bundle" (OuterVolumeSpecName: "bundle") pod "53208248-6725-47e0-8dbd-44f4a14cd8dd" (UID: "53208248-6725-47e0-8dbd-44f4a14cd8dd"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:34:08.065741 master-0 kubenswrapper[19803]: I0313 01:34:08.065696 19803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:08.065992 master-0 kubenswrapper[19803]: I0313 01:34:08.065960 19803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:08.066168 master-0 kubenswrapper[19803]: I0313 01:34:08.066140 19803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3fd58bf2-4149-40df-8abb-d782378b1e5a-util\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:08.066355 master-0 kubenswrapper[19803]: I0313 01:34:08.066322 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m68q8\" (UniqueName: \"kubernetes.io/projected/3fd58bf2-4149-40df-8abb-d782378b1e5a-kube-api-access-m68q8\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:08.066580 master-0 kubenswrapper[19803]: I0313 01:34:08.066547 19803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:08.068017 master-0 kubenswrapper[19803]: I0313 01:34:08.067966 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53208248-6725-47e0-8dbd-44f4a14cd8dd-kube-api-access-q9264" (OuterVolumeSpecName: "kube-api-access-q9264") pod "53208248-6725-47e0-8dbd-44f4a14cd8dd" (UID: "53208248-6725-47e0-8dbd-44f4a14cd8dd"). InnerVolumeSpecName "kube-api-access-q9264". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:34:08.068335 master-0 kubenswrapper[19803]: I0313 01:34:08.068287 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63413ca9-9863-4285-b222-544c77cc64a2-kube-api-access-q5w8v" (OuterVolumeSpecName: "kube-api-access-q5w8v") pod "63413ca9-9863-4285-b222-544c77cc64a2" (UID: "63413ca9-9863-4285-b222-544c77cc64a2"). InnerVolumeSpecName "kube-api-access-q5w8v". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:34:08.073355 master-0 kubenswrapper[19803]: I0313 01:34:08.073307 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-util" (OuterVolumeSpecName: "util") pod "63413ca9-9863-4285-b222-544c77cc64a2" (UID: "63413ca9-9863-4285-b222-544c77cc64a2"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:34:08.088204 master-0 kubenswrapper[19803]: I0313 01:34:08.088100 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-util" (OuterVolumeSpecName: "util") pod "53208248-6725-47e0-8dbd-44f4a14cd8dd" (UID: "53208248-6725-47e0-8dbd-44f4a14cd8dd"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:34:08.167482 master-0 kubenswrapper[19803]: I0313 01:34:08.167319 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q5w8v\" (UniqueName: \"kubernetes.io/projected/63413ca9-9863-4285-b222-544c77cc64a2-kube-api-access-q5w8v\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:08.167482 master-0 kubenswrapper[19803]: I0313 01:34:08.167363 19803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/53208248-6725-47e0-8dbd-44f4a14cd8dd-util\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:08.167482 master-0 kubenswrapper[19803]: I0313 01:34:08.167376 19803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/63413ca9-9863-4285-b222-544c77cc64a2-util\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:08.167482 master-0 kubenswrapper[19803]: I0313 01:34:08.167387 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q9264\" (UniqueName: \"kubernetes.io/projected/53208248-6725-47e0-8dbd-44f4a14cd8dd-kube-api-access-q9264\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:08.293835 master-0 kubenswrapper[19803]: I0313 01:34:08.293772 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" Mar 13 01:34:08.294126 master-0 kubenswrapper[19803]: I0313 01:34:08.294061 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/925ad1f05bf386dc21bdfe2f8249c1fbfd04a404dec7a7fb6362d758e5qf49n" event={"ID":"53208248-6725-47e0-8dbd-44f4a14cd8dd","Type":"ContainerDied","Data":"2f387ab7b5a533e087c2d4de5fdb15361cc3a1e6e1e3ac9b489d67081e62d971"} Mar 13 01:34:08.294171 master-0 kubenswrapper[19803]: I0313 01:34:08.294142 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f387ab7b5a533e087c2d4de5fdb15361cc3a1e6e1e3ac9b489d67081e62d971" Mar 13 01:34:08.298089 master-0 kubenswrapper[19803]: I0313 01:34:08.298033 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" event={"ID":"63413ca9-9863-4285-b222-544c77cc64a2","Type":"ContainerDied","Data":"a0aa2ab34cf1eb6868bcbb9ca2ad554d765a25ce23fcece53f94c588c750987b"} Mar 13 01:34:08.298151 master-0 kubenswrapper[19803]: I0313 01:34:08.298087 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1d8741a795bd73341bdd61a6e59c08511cf9466dbb5fc4045ac2dde874n7g7d" Mar 13 01:34:08.298151 master-0 kubenswrapper[19803]: I0313 01:34:08.298108 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a0aa2ab34cf1eb6868bcbb9ca2ad554d765a25ce23fcece53f94c588c750987b" Mar 13 01:34:08.302638 master-0 kubenswrapper[19803]: I0313 01:34:08.302471 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" event={"ID":"3fd58bf2-4149-40df-8abb-d782378b1e5a","Type":"ContainerDied","Data":"34667dcb88a539116dc8110b7d61830403addcccb6891d0032c5b2ee4a1d9130"} Mar 13 01:34:08.302638 master-0 kubenswrapper[19803]: I0313 01:34:08.302578 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34667dcb88a539116dc8110b7d61830403addcccb6891d0032c5b2ee4a1d9130" Mar 13 01:34:08.302938 master-0 kubenswrapper[19803]: I0313 01:34:08.302658 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/2d3ddce10053cc6867b5a0ce1614b30225f3a63fab79a72148165675c1jc9cz" Mar 13 01:34:08.578597 master-0 kubenswrapper[19803]: I0313 01:34:08.578526 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s"] Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: E0313 01:34:08.578861 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd58bf2-4149-40df-8abb-d782378b1e5a" containerName="extract" Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: I0313 01:34:08.578878 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd58bf2-4149-40df-8abb-d782378b1e5a" containerName="extract" Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: E0313 01:34:08.578891 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53208248-6725-47e0-8dbd-44f4a14cd8dd" containerName="util" Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: I0313 01:34:08.578899 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="53208248-6725-47e0-8dbd-44f4a14cd8dd" containerName="util" Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: E0313 01:34:08.578914 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63413ca9-9863-4285-b222-544c77cc64a2" containerName="pull" Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: I0313 01:34:08.578923 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="63413ca9-9863-4285-b222-544c77cc64a2" containerName="pull" Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: E0313 01:34:08.578942 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53208248-6725-47e0-8dbd-44f4a14cd8dd" containerName="extract" Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: I0313 01:34:08.578951 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="53208248-6725-47e0-8dbd-44f4a14cd8dd" containerName="extract" Mar 13 01:34:08.579001 master-0 
kubenswrapper[19803]: E0313 01:34:08.578979 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63413ca9-9863-4285-b222-544c77cc64a2" containerName="util" Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: I0313 01:34:08.578988 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="63413ca9-9863-4285-b222-544c77cc64a2" containerName="util" Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: E0313 01:34:08.579006 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd58bf2-4149-40df-8abb-d782378b1e5a" containerName="pull" Mar 13 01:34:08.579001 master-0 kubenswrapper[19803]: I0313 01:34:08.579015 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd58bf2-4149-40df-8abb-d782378b1e5a" containerName="pull" Mar 13 01:34:08.579544 master-0 kubenswrapper[19803]: E0313 01:34:08.579030 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53208248-6725-47e0-8dbd-44f4a14cd8dd" containerName="pull" Mar 13 01:34:08.579544 master-0 kubenswrapper[19803]: I0313 01:34:08.579039 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="53208248-6725-47e0-8dbd-44f4a14cd8dd" containerName="pull" Mar 13 01:34:08.579544 master-0 kubenswrapper[19803]: E0313 01:34:08.579052 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63413ca9-9863-4285-b222-544c77cc64a2" containerName="extract" Mar 13 01:34:08.579544 master-0 kubenswrapper[19803]: I0313 01:34:08.579063 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="63413ca9-9863-4285-b222-544c77cc64a2" containerName="extract" Mar 13 01:34:08.579544 master-0 kubenswrapper[19803]: E0313 01:34:08.579086 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fd58bf2-4149-40df-8abb-d782378b1e5a" containerName="util" Mar 13 01:34:08.579544 master-0 kubenswrapper[19803]: I0313 01:34:08.579094 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fd58bf2-4149-40df-8abb-d782378b1e5a" containerName="util" Mar 13 
01:34:08.579544 master-0 kubenswrapper[19803]: I0313 01:34:08.579278 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="53208248-6725-47e0-8dbd-44f4a14cd8dd" containerName="extract" Mar 13 01:34:08.579544 master-0 kubenswrapper[19803]: I0313 01:34:08.579315 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="63413ca9-9863-4285-b222-544c77cc64a2" containerName="extract" Mar 13 01:34:08.579544 master-0 kubenswrapper[19803]: I0313 01:34:08.579341 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fd58bf2-4149-40df-8abb-d782378b1e5a" containerName="extract" Mar 13 01:34:08.580385 master-0 kubenswrapper[19803]: I0313 01:34:08.580360 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:08.603847 master-0 kubenswrapper[19803]: I0313 01:34:08.603775 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s"] Mar 13 01:34:08.674176 master-0 kubenswrapper[19803]: I0313 01:34:08.674106 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:08.674452 master-0 kubenswrapper[19803]: I0313 01:34:08.674207 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:08.674452 master-0 kubenswrapper[19803]: I0313 01:34:08.674262 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwqpj\" (UniqueName: \"kubernetes.io/projected/f4c6b091-4386-4ea9-9bee-7856b30a2c64-kube-api-access-hwqpj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:08.775684 master-0 kubenswrapper[19803]: I0313 01:34:08.775570 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:08.776076 master-0 kubenswrapper[19803]: I0313 01:34:08.775821 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:08.776076 master-0 kubenswrapper[19803]: I0313 01:34:08.776061 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwqpj\" (UniqueName: \"kubernetes.io/projected/f4c6b091-4386-4ea9-9bee-7856b30a2c64-kube-api-access-hwqpj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " 
pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:08.776883 master-0 kubenswrapper[19803]: I0313 01:34:08.776824 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:08.777185 master-0 kubenswrapper[19803]: I0313 01:34:08.776976 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:08.791497 master-0 kubenswrapper[19803]: I0313 01:34:08.791410 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwqpj\" (UniqueName: \"kubernetes.io/projected/f4c6b091-4386-4ea9-9bee-7856b30a2c64-kube-api-access-hwqpj\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:08.957816 master-0 kubenswrapper[19803]: I0313 01:34:08.957651 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:09.456670 master-0 kubenswrapper[19803]: I0313 01:34:09.455051 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s"] Mar 13 01:34:09.461788 master-0 kubenswrapper[19803]: W0313 01:34:09.461714 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4c6b091_4386_4ea9_9bee_7856b30a2c64.slice/crio-5d083780f771a4f006517da21ae527c509c7c71d49f4102eec35aac3af3f565c WatchSource:0}: Error finding container 5d083780f771a4f006517da21ae527c509c7c71d49f4102eec35aac3af3f565c: Status 404 returned error can't find the container with id 5d083780f771a4f006517da21ae527c509c7c71d49f4102eec35aac3af3f565c Mar 13 01:34:10.325566 master-0 kubenswrapper[19803]: I0313 01:34:10.325326 19803 generic.go:334] "Generic (PLEG): container finished" podID="f4c6b091-4386-4ea9-9bee-7856b30a2c64" containerID="6c3f469f48a8fc25d68dab152bc8c854b16fcffb5d7fdb39b35198962460bd34" exitCode=0 Mar 13 01:34:10.326845 master-0 kubenswrapper[19803]: I0313 01:34:10.325959 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" event={"ID":"f4c6b091-4386-4ea9-9bee-7856b30a2c64","Type":"ContainerDied","Data":"6c3f469f48a8fc25d68dab152bc8c854b16fcffb5d7fdb39b35198962460bd34"} Mar 13 01:34:10.326845 master-0 kubenswrapper[19803]: I0313 01:34:10.326009 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" event={"ID":"f4c6b091-4386-4ea9-9bee-7856b30a2c64","Type":"ContainerStarted","Data":"5d083780f771a4f006517da21ae527c509c7c71d49f4102eec35aac3af3f565c"} Mar 13 01:34:12.340422 master-0 kubenswrapper[19803]: I0313 01:34:12.340357 19803 
generic.go:334] "Generic (PLEG): container finished" podID="f4c6b091-4386-4ea9-9bee-7856b30a2c64" containerID="c815a5374b021b4fee6b25bc029e652b9507e9b754fa01e1ea984379408b500a" exitCode=0 Mar 13 01:34:12.340422 master-0 kubenswrapper[19803]: I0313 01:34:12.340413 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" event={"ID":"f4c6b091-4386-4ea9-9bee-7856b30a2c64","Type":"ContainerDied","Data":"c815a5374b021b4fee6b25bc029e652b9507e9b754fa01e1ea984379408b500a"} Mar 13 01:34:13.354759 master-0 kubenswrapper[19803]: I0313 01:34:13.354593 19803 generic.go:334] "Generic (PLEG): container finished" podID="f4c6b091-4386-4ea9-9bee-7856b30a2c64" containerID="28514692a39f196d97a4267a2b729472a91960f266b5a4831f68ed4a811c8404" exitCode=0 Mar 13 01:34:13.354759 master-0 kubenswrapper[19803]: I0313 01:34:13.354692 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" event={"ID":"f4c6b091-4386-4ea9-9bee-7856b30a2c64","Type":"ContainerDied","Data":"28514692a39f196d97a4267a2b729472a91960f266b5a4831f68ed4a811c8404"} Mar 13 01:34:14.684612 master-0 kubenswrapper[19803]: I0313 01:34:14.684261 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff"] Mar 13 01:34:14.694387 master-0 kubenswrapper[19803]: I0313 01:34:14.685286 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" Mar 13 01:34:14.694387 master-0 kubenswrapper[19803]: I0313 01:34:14.694076 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Mar 13 01:34:14.694387 master-0 kubenswrapper[19803]: I0313 01:34:14.694340 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Mar 13 01:34:14.708537 master-0 kubenswrapper[19803]: I0313 01:34:14.705299 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff"] Mar 13 01:34:14.733561 master-0 kubenswrapper[19803]: I0313 01:34:14.727642 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de787d51-fc62-46c0-872b-2f13a312ce81-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-zpmff\" (UID: \"de787d51-fc62-46c0-872b-2f13a312ce81\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" Mar 13 01:34:14.733561 master-0 kubenswrapper[19803]: I0313 01:34:14.727792 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ss6t9\" (UniqueName: \"kubernetes.io/projected/de787d51-fc62-46c0-872b-2f13a312ce81-kube-api-access-ss6t9\") pod \"cert-manager-operator-controller-manager-66c8bdd694-zpmff\" (UID: \"de787d51-fc62-46c0-872b-2f13a312ce81\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" Mar 13 01:34:14.822019 master-0 kubenswrapper[19803]: I0313 01:34:14.821977 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:14.830488 master-0 kubenswrapper[19803]: I0313 01:34:14.830411 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de787d51-fc62-46c0-872b-2f13a312ce81-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-zpmff\" (UID: \"de787d51-fc62-46c0-872b-2f13a312ce81\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" Mar 13 01:34:14.830823 master-0 kubenswrapper[19803]: I0313 01:34:14.830576 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ss6t9\" (UniqueName: \"kubernetes.io/projected/de787d51-fc62-46c0-872b-2f13a312ce81-kube-api-access-ss6t9\") pod \"cert-manager-operator-controller-manager-66c8bdd694-zpmff\" (UID: \"de787d51-fc62-46c0-872b-2f13a312ce81\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" Mar 13 01:34:14.831360 master-0 kubenswrapper[19803]: I0313 01:34:14.831297 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/de787d51-fc62-46c0-872b-2f13a312ce81-tmp\") pod \"cert-manager-operator-controller-manager-66c8bdd694-zpmff\" (UID: \"de787d51-fc62-46c0-872b-2f13a312ce81\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" Mar 13 01:34:14.850837 master-0 kubenswrapper[19803]: I0313 01:34:14.850756 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ss6t9\" (UniqueName: \"kubernetes.io/projected/de787d51-fc62-46c0-872b-2f13a312ce81-kube-api-access-ss6t9\") pod \"cert-manager-operator-controller-manager-66c8bdd694-zpmff\" (UID: \"de787d51-fc62-46c0-872b-2f13a312ce81\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" Mar 13 01:34:14.931744 
master-0 kubenswrapper[19803]: I0313 01:34:14.931668 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwqpj\" (UniqueName: \"kubernetes.io/projected/f4c6b091-4386-4ea9-9bee-7856b30a2c64-kube-api-access-hwqpj\") pod \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " Mar 13 01:34:14.931744 master-0 kubenswrapper[19803]: I0313 01:34:14.931754 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-bundle\") pod \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " Mar 13 01:34:14.932069 master-0 kubenswrapper[19803]: I0313 01:34:14.931831 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-util\") pod \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\" (UID: \"f4c6b091-4386-4ea9-9bee-7856b30a2c64\") " Mar 13 01:34:14.935601 master-0 kubenswrapper[19803]: I0313 01:34:14.934330 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-bundle" (OuterVolumeSpecName: "bundle") pod "f4c6b091-4386-4ea9-9bee-7856b30a2c64" (UID: "f4c6b091-4386-4ea9-9bee-7856b30a2c64"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:34:14.941090 master-0 kubenswrapper[19803]: I0313 01:34:14.940982 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4c6b091-4386-4ea9-9bee-7856b30a2c64-kube-api-access-hwqpj" (OuterVolumeSpecName: "kube-api-access-hwqpj") pod "f4c6b091-4386-4ea9-9bee-7856b30a2c64" (UID: "f4c6b091-4386-4ea9-9bee-7856b30a2c64"). InnerVolumeSpecName "kube-api-access-hwqpj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:34:14.951598 master-0 kubenswrapper[19803]: I0313 01:34:14.951529 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-util" (OuterVolumeSpecName: "util") pod "f4c6b091-4386-4ea9-9bee-7856b30a2c64" (UID: "f4c6b091-4386-4ea9-9bee-7856b30a2c64"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Mar 13 01:34:15.033756 master-0 kubenswrapper[19803]: I0313 01:34:15.033692 19803 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-util\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:15.033756 master-0 kubenswrapper[19803]: I0313 01:34:15.033743 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwqpj\" (UniqueName: \"kubernetes.io/projected/f4c6b091-4386-4ea9-9bee-7856b30a2c64-kube-api-access-hwqpj\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:15.033756 master-0 kubenswrapper[19803]: I0313 01:34:15.033756 19803 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f4c6b091-4386-4ea9-9bee-7856b30a2c64-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:34:15.114548 master-0 kubenswrapper[19803]: I0313 01:34:15.114453 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" Mar 13 01:34:15.385888 master-0 kubenswrapper[19803]: I0313 01:34:15.385825 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" event={"ID":"f4c6b091-4386-4ea9-9bee-7856b30a2c64","Type":"ContainerDied","Data":"5d083780f771a4f006517da21ae527c509c7c71d49f4102eec35aac3af3f565c"} Mar 13 01:34:15.385888 master-0 kubenswrapper[19803]: I0313 01:34:15.385887 19803 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d083780f771a4f006517da21ae527c509c7c71d49f4102eec35aac3af3f565c" Mar 13 01:34:15.388597 master-0 kubenswrapper[19803]: I0313 01:34:15.386256 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08gjb8s" Mar 13 01:34:15.588326 master-0 kubenswrapper[19803]: I0313 01:34:15.588240 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff"] Mar 13 01:34:16.398630 master-0 kubenswrapper[19803]: I0313 01:34:16.396589 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" event={"ID":"de787d51-fc62-46c0-872b-2f13a312ce81","Type":"ContainerStarted","Data":"d3ee26dcb5152f93748a0f91ca27319fbb0d87d4b86eb15194b9a76d7197388f"} Mar 13 01:34:20.443863 master-0 kubenswrapper[19803]: I0313 01:34:20.443802 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" event={"ID":"de787d51-fc62-46c0-872b-2f13a312ce81","Type":"ContainerStarted","Data":"277b7ca5071b6493f1e22c206206e63d0c53572a63cfc5c32a93314eec94eb0d"} Mar 13 01:34:20.484329 master-0 kubenswrapper[19803]: I0313 
01:34:20.484258 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-66c8bdd694-zpmff" podStartSLOduration=2.831920687 podStartE2EDuration="6.484238526s" podCreationTimestamp="2026-03-13 01:34:14 +0000 UTC" firstStartedPulling="2026-03-13 01:34:15.604122137 +0000 UTC m=+1003.569269816" lastFinishedPulling="2026-03-13 01:34:19.256439976 +0000 UTC m=+1007.221587655" observedRunningTime="2026-03-13 01:34:20.478697179 +0000 UTC m=+1008.443844878" watchObservedRunningTime="2026-03-13 01:34:20.484238526 +0000 UTC m=+1008.449386205" Mar 13 01:34:23.500573 master-0 kubenswrapper[19803]: I0313 01:34:23.500455 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-5rwl7"] Mar 13 01:34:23.501375 master-0 kubenswrapper[19803]: E0313 01:34:23.500879 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c6b091-4386-4ea9-9bee-7856b30a2c64" containerName="util" Mar 13 01:34:23.501375 master-0 kubenswrapper[19803]: I0313 01:34:23.500899 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c6b091-4386-4ea9-9bee-7856b30a2c64" containerName="util" Mar 13 01:34:23.501375 master-0 kubenswrapper[19803]: E0313 01:34:23.500935 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c6b091-4386-4ea9-9bee-7856b30a2c64" containerName="pull" Mar 13 01:34:23.501375 master-0 kubenswrapper[19803]: I0313 01:34:23.500942 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c6b091-4386-4ea9-9bee-7856b30a2c64" containerName="pull" Mar 13 01:34:23.501375 master-0 kubenswrapper[19803]: E0313 01:34:23.500951 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4c6b091-4386-4ea9-9bee-7856b30a2c64" containerName="extract" Mar 13 01:34:23.501375 master-0 kubenswrapper[19803]: I0313 01:34:23.500958 19803 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4c6b091-4386-4ea9-9bee-7856b30a2c64" 
containerName="extract" Mar 13 01:34:23.501375 master-0 kubenswrapper[19803]: I0313 01:34:23.501118 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4c6b091-4386-4ea9-9bee-7856b30a2c64" containerName="extract" Mar 13 01:34:23.501734 master-0 kubenswrapper[19803]: I0313 01:34:23.501705 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" Mar 13 01:34:23.503855 master-0 kubenswrapper[19803]: I0313 01:34:23.503771 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Mar 13 01:34:23.505338 master-0 kubenswrapper[19803]: I0313 01:34:23.505292 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Mar 13 01:34:23.518103 master-0 kubenswrapper[19803]: I0313 01:34:23.518039 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-5rwl7"] Mar 13 01:34:23.604437 master-0 kubenswrapper[19803]: I0313 01:34:23.604305 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9db962b8-555d-43be-8bc0-91bd58d8a9cc-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-5rwl7\" (UID: \"9db962b8-555d-43be-8bc0-91bd58d8a9cc\") " pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" Mar 13 01:34:23.605486 master-0 kubenswrapper[19803]: I0313 01:34:23.604882 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9pb67\" (UniqueName: \"kubernetes.io/projected/9db962b8-555d-43be-8bc0-91bd58d8a9cc-kube-api-access-9pb67\") pod \"cert-manager-webhook-6888856db4-5rwl7\" (UID: \"9db962b8-555d-43be-8bc0-91bd58d8a9cc\") " pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" Mar 13 01:34:23.706652 master-0 kubenswrapper[19803]: I0313 01:34:23.706573 19803 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9db962b8-555d-43be-8bc0-91bd58d8a9cc-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-5rwl7\" (UID: \"9db962b8-555d-43be-8bc0-91bd58d8a9cc\") " pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" Mar 13 01:34:23.707141 master-0 kubenswrapper[19803]: I0313 01:34:23.707123 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9pb67\" (UniqueName: \"kubernetes.io/projected/9db962b8-555d-43be-8bc0-91bd58d8a9cc-kube-api-access-9pb67\") pod \"cert-manager-webhook-6888856db4-5rwl7\" (UID: \"9db962b8-555d-43be-8bc0-91bd58d8a9cc\") " pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" Mar 13 01:34:23.734203 master-0 kubenswrapper[19803]: I0313 01:34:23.734140 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9db962b8-555d-43be-8bc0-91bd58d8a9cc-bound-sa-token\") pod \"cert-manager-webhook-6888856db4-5rwl7\" (UID: \"9db962b8-555d-43be-8bc0-91bd58d8a9cc\") " pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" Mar 13 01:34:23.734930 master-0 kubenswrapper[19803]: I0313 01:34:23.734873 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9pb67\" (UniqueName: \"kubernetes.io/projected/9db962b8-555d-43be-8bc0-91bd58d8a9cc-kube-api-access-9pb67\") pod \"cert-manager-webhook-6888856db4-5rwl7\" (UID: \"9db962b8-555d-43be-8bc0-91bd58d8a9cc\") " pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" Mar 13 01:34:23.826862 master-0 kubenswrapper[19803]: I0313 01:34:23.826718 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" Mar 13 01:34:24.347147 master-0 kubenswrapper[19803]: I0313 01:34:24.347074 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-6888856db4-5rwl7"] Mar 13 01:34:24.480430 master-0 kubenswrapper[19803]: I0313 01:34:24.480356 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" event={"ID":"9db962b8-555d-43be-8bc0-91bd58d8a9cc","Type":"ContainerStarted","Data":"1d2edcf0d4694ac4d2851c9aee87c5a8acc325c7509d179e8032b3d04e15021f"} Mar 13 01:34:24.707794 master-0 kubenswrapper[19803]: I0313 01:34:24.707629 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-r9l6s"] Mar 13 01:34:24.708692 master-0 kubenswrapper[19803]: I0313 01:34:24.708648 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" Mar 13 01:34:24.741267 master-0 kubenswrapper[19803]: I0313 01:34:24.741172 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-r9l6s"] Mar 13 01:34:24.841672 master-0 kubenswrapper[19803]: I0313 01:34:24.841571 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/895bac03-aaa0-46e5-a41f-ba1f2b6c5793-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-r9l6s\" (UID: \"895bac03-aaa0-46e5-a41f-ba1f2b6c5793\") " pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" Mar 13 01:34:24.841672 master-0 kubenswrapper[19803]: I0313 01:34:24.841651 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rh6w\" (UniqueName: \"kubernetes.io/projected/895bac03-aaa0-46e5-a41f-ba1f2b6c5793-kube-api-access-4rh6w\") pod 
\"cert-manager-cainjector-5545bd876-r9l6s\" (UID: \"895bac03-aaa0-46e5-a41f-ba1f2b6c5793\") " pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" Mar 13 01:34:24.942730 master-0 kubenswrapper[19803]: I0313 01:34:24.942629 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/895bac03-aaa0-46e5-a41f-ba1f2b6c5793-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-r9l6s\" (UID: \"895bac03-aaa0-46e5-a41f-ba1f2b6c5793\") " pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" Mar 13 01:34:24.943112 master-0 kubenswrapper[19803]: I0313 01:34:24.942869 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4rh6w\" (UniqueName: \"kubernetes.io/projected/895bac03-aaa0-46e5-a41f-ba1f2b6c5793-kube-api-access-4rh6w\") pod \"cert-manager-cainjector-5545bd876-r9l6s\" (UID: \"895bac03-aaa0-46e5-a41f-ba1f2b6c5793\") " pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" Mar 13 01:34:24.961001 master-0 kubenswrapper[19803]: I0313 01:34:24.960868 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4rh6w\" (UniqueName: \"kubernetes.io/projected/895bac03-aaa0-46e5-a41f-ba1f2b6c5793-kube-api-access-4rh6w\") pod \"cert-manager-cainjector-5545bd876-r9l6s\" (UID: \"895bac03-aaa0-46e5-a41f-ba1f2b6c5793\") " pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" Mar 13 01:34:24.970540 master-0 kubenswrapper[19803]: I0313 01:34:24.967440 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/895bac03-aaa0-46e5-a41f-ba1f2b6c5793-bound-sa-token\") pod \"cert-manager-cainjector-5545bd876-r9l6s\" (UID: \"895bac03-aaa0-46e5-a41f-ba1f2b6c5793\") " pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" Mar 13 01:34:25.028135 master-0 kubenswrapper[19803]: I0313 01:34:25.028054 19803 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" Mar 13 01:34:25.488495 master-0 kubenswrapper[19803]: I0313 01:34:25.488406 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-5545bd876-r9l6s"] Mar 13 01:34:26.499546 master-0 kubenswrapper[19803]: I0313 01:34:26.499455 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" event={"ID":"895bac03-aaa0-46e5-a41f-ba1f2b6c5793","Type":"ContainerStarted","Data":"5596e9f14bd03a6972a7aab1bc123fcd1d35216e862ca7c05fdf3c083d4c2213"} Mar 13 01:34:27.798234 master-0 kubenswrapper[19803]: I0313 01:34:27.798163 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j"] Mar 13 01:34:27.809560 master-0 kubenswrapper[19803]: I0313 01:34:27.805940 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j" Mar 13 01:34:27.809560 master-0 kubenswrapper[19803]: I0313 01:34:27.808483 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Mar 13 01:34:27.812577 master-0 kubenswrapper[19803]: I0313 01:34:27.812169 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Mar 13 01:34:27.819946 master-0 kubenswrapper[19803]: I0313 01:34:27.818658 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j"] Mar 13 01:34:27.907899 master-0 kubenswrapper[19803]: I0313 01:34:27.907838 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8mtj\" (UniqueName: \"kubernetes.io/projected/c7c96cc6-98a5-467b-aed4-c50790caa51e-kube-api-access-n8mtj\") pod \"nmstate-operator-796d4cfff4-rjb7j\" (UID: \"c7c96cc6-98a5-467b-aed4-c50790caa51e\") " 
pod="openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j" Mar 13 01:34:28.009521 master-0 kubenswrapper[19803]: I0313 01:34:28.009425 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8mtj\" (UniqueName: \"kubernetes.io/projected/c7c96cc6-98a5-467b-aed4-c50790caa51e-kube-api-access-n8mtj\") pod \"nmstate-operator-796d4cfff4-rjb7j\" (UID: \"c7c96cc6-98a5-467b-aed4-c50790caa51e\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j" Mar 13 01:34:28.025810 master-0 kubenswrapper[19803]: I0313 01:34:28.025746 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8mtj\" (UniqueName: \"kubernetes.io/projected/c7c96cc6-98a5-467b-aed4-c50790caa51e-kube-api-access-n8mtj\") pod \"nmstate-operator-796d4cfff4-rjb7j\" (UID: \"c7c96cc6-98a5-467b-aed4-c50790caa51e\") " pod="openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j" Mar 13 01:34:28.127099 master-0 kubenswrapper[19803]: I0313 01:34:28.126954 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j" Mar 13 01:34:31.201621 master-0 kubenswrapper[19803]: I0313 01:34:31.201571 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j"] Mar 13 01:34:31.580709 master-0 kubenswrapper[19803]: I0313 01:34:31.579720 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j" event={"ID":"c7c96cc6-98a5-467b-aed4-c50790caa51e","Type":"ContainerStarted","Data":"d1e2f41195b275de717f033b3d90c59979c8935e5f8ddf9a6c1c27c6a76e9098"} Mar 13 01:34:32.419865 master-0 kubenswrapper[19803]: I0313 01:34:32.419812 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j"] Mar 13 01:34:32.420901 master-0 kubenswrapper[19803]: I0313 01:34:32.420876 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:32.423331 master-0 kubenswrapper[19803]: I0313 01:34:32.423302 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Mar 13 01:34:32.424134 master-0 kubenswrapper[19803]: I0313 01:34:32.424094 19803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Mar 13 01:34:32.424412 master-0 kubenswrapper[19803]: I0313 01:34:32.424380 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Mar 13 01:34:32.425786 master-0 kubenswrapper[19803]: I0313 01:34:32.425756 19803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Mar 13 01:34:32.446982 master-0 kubenswrapper[19803]: I0313 01:34:32.445872 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j"] Mar 13 01:34:32.514724 master-0 kubenswrapper[19803]: I0313 01:34:32.514650 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6-webhook-cert\") pod \"metallb-operator-controller-manager-6984bbdf9-qw42j\" (UID: \"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6\") " pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:32.514995 master-0 kubenswrapper[19803]: I0313 01:34:32.514913 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5lnv\" (UniqueName: \"kubernetes.io/projected/974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6-kube-api-access-n5lnv\") pod \"metallb-operator-controller-manager-6984bbdf9-qw42j\" (UID: \"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6\") " 
pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:32.515269 master-0 kubenswrapper[19803]: I0313 01:34:32.515232 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6-apiservice-cert\") pod \"metallb-operator-controller-manager-6984bbdf9-qw42j\" (UID: \"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6\") " pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:32.604025 master-0 kubenswrapper[19803]: I0313 01:34:32.603956 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" event={"ID":"9db962b8-555d-43be-8bc0-91bd58d8a9cc","Type":"ContainerStarted","Data":"763f51202dbe6a9fc29ca766efc454c65bdb8a80a880f83e74dfb8e386df3b90"} Mar 13 01:34:32.605172 master-0 kubenswrapper[19803]: I0313 01:34:32.605142 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" Mar 13 01:34:32.610065 master-0 kubenswrapper[19803]: I0313 01:34:32.610019 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" event={"ID":"895bac03-aaa0-46e5-a41f-ba1f2b6c5793","Type":"ContainerStarted","Data":"e3128956fde8c2e392de452c2d13b38d79ffcb5cb228ad6e4753eb44936e103e"} Mar 13 01:34:32.620541 master-0 kubenswrapper[19803]: I0313 01:34:32.618480 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6-apiservice-cert\") pod \"metallb-operator-controller-manager-6984bbdf9-qw42j\" (UID: \"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6\") " pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:32.620541 master-0 kubenswrapper[19803]: I0313 01:34:32.618638 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6-webhook-cert\") pod \"metallb-operator-controller-manager-6984bbdf9-qw42j\" (UID: \"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6\") " pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:32.620541 master-0 kubenswrapper[19803]: I0313 01:34:32.619314 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5lnv\" (UniqueName: \"kubernetes.io/projected/974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6-kube-api-access-n5lnv\") pod \"metallb-operator-controller-manager-6984bbdf9-qw42j\" (UID: \"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6\") " pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:32.624536 master-0 kubenswrapper[19803]: I0313 01:34:32.623994 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6-apiservice-cert\") pod \"metallb-operator-controller-manager-6984bbdf9-qw42j\" (UID: \"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6\") " pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:32.636360 master-0 kubenswrapper[19803]: I0313 01:34:32.636310 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6-webhook-cert\") pod \"metallb-operator-controller-manager-6984bbdf9-qw42j\" (UID: \"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6\") " pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:32.648545 master-0 kubenswrapper[19803]: I0313 01:34:32.647491 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5lnv\" (UniqueName: \"kubernetes.io/projected/974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6-kube-api-access-n5lnv\") pod 
\"metallb-operator-controller-manager-6984bbdf9-qw42j\" (UID: \"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6\") " pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:32.666529 master-0 kubenswrapper[19803]: I0313 01:34:32.663973 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" podStartSLOduration=2.466926657 podStartE2EDuration="9.663947625s" podCreationTimestamp="2026-03-13 01:34:23 +0000 UTC" firstStartedPulling="2026-03-13 01:34:24.339701955 +0000 UTC m=+1012.304849634" lastFinishedPulling="2026-03-13 01:34:31.536722923 +0000 UTC m=+1019.501870602" observedRunningTime="2026-03-13 01:34:32.651847608 +0000 UTC m=+1020.616995277" watchObservedRunningTime="2026-03-13 01:34:32.663947625 +0000 UTC m=+1020.629095314" Mar 13 01:34:32.732638 master-0 kubenswrapper[19803]: I0313 01:34:32.731503 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-5545bd876-r9l6s" podStartSLOduration=2.649841211 podStartE2EDuration="8.731473738s" podCreationTimestamp="2026-03-13 01:34:24 +0000 UTC" firstStartedPulling="2026-03-13 01:34:25.490064055 +0000 UTC m=+1013.455211744" lastFinishedPulling="2026-03-13 01:34:31.571696592 +0000 UTC m=+1019.536844271" observedRunningTime="2026-03-13 01:34:32.686553481 +0000 UTC m=+1020.651701160" watchObservedRunningTime="2026-03-13 01:34:32.731473738 +0000 UTC m=+1020.696621427" Mar 13 01:34:32.737955 master-0 kubenswrapper[19803]: I0313 01:34:32.737761 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" Mar 13 01:34:33.198408 master-0 kubenswrapper[19803]: I0313 01:34:33.197814 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2"] Mar 13 01:34:33.199365 master-0 kubenswrapper[19803]: I0313 01:34:33.199305 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.230541 master-0 kubenswrapper[19803]: I0313 01:34:33.227006 19803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 13 01:34:33.230541 master-0 kubenswrapper[19803]: I0313 01:34:33.227290 19803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Mar 13 01:34:33.230541 master-0 kubenswrapper[19803]: I0313 01:34:33.229838 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36c14515-2f07-46ad-a5cd-1e81ccb8506e-apiservice-cert\") pod \"metallb-operator-webhook-server-6c89d777d4-h7xf2\" (UID: \"36c14515-2f07-46ad-a5cd-1e81ccb8506e\") " pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.230541 master-0 kubenswrapper[19803]: I0313 01:34:33.229896 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8rzvp\" (UniqueName: \"kubernetes.io/projected/36c14515-2f07-46ad-a5cd-1e81ccb8506e-kube-api-access-8rzvp\") pod \"metallb-operator-webhook-server-6c89d777d4-h7xf2\" (UID: \"36c14515-2f07-46ad-a5cd-1e81ccb8506e\") " pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.230541 master-0 kubenswrapper[19803]: I0313 01:34:33.229943 19803 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36c14515-2f07-46ad-a5cd-1e81ccb8506e-webhook-cert\") pod \"metallb-operator-webhook-server-6c89d777d4-h7xf2\" (UID: \"36c14515-2f07-46ad-a5cd-1e81ccb8506e\") " pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.275538 master-0 kubenswrapper[19803]: I0313 01:34:33.272110 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2"] Mar 13 01:34:33.331721 master-0 kubenswrapper[19803]: I0313 01:34:33.331665 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j"] Mar 13 01:34:33.332036 master-0 kubenswrapper[19803]: I0313 01:34:33.332005 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36c14515-2f07-46ad-a5cd-1e81ccb8506e-apiservice-cert\") pod \"metallb-operator-webhook-server-6c89d777d4-h7xf2\" (UID: \"36c14515-2f07-46ad-a5cd-1e81ccb8506e\") " pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.332157 master-0 kubenswrapper[19803]: I0313 01:34:33.332135 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8rzvp\" (UniqueName: \"kubernetes.io/projected/36c14515-2f07-46ad-a5cd-1e81ccb8506e-kube-api-access-8rzvp\") pod \"metallb-operator-webhook-server-6c89d777d4-h7xf2\" (UID: \"36c14515-2f07-46ad-a5cd-1e81ccb8506e\") " pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.332263 master-0 kubenswrapper[19803]: I0313 01:34:33.332248 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36c14515-2f07-46ad-a5cd-1e81ccb8506e-webhook-cert\") pod \"metallb-operator-webhook-server-6c89d777d4-h7xf2\" (UID: 
\"36c14515-2f07-46ad-a5cd-1e81ccb8506e\") " pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.346778 master-0 kubenswrapper[19803]: I0313 01:34:33.346752 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/36c14515-2f07-46ad-a5cd-1e81ccb8506e-webhook-cert\") pod \"metallb-operator-webhook-server-6c89d777d4-h7xf2\" (UID: \"36c14515-2f07-46ad-a5cd-1e81ccb8506e\") " pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.348687 master-0 kubenswrapper[19803]: I0313 01:34:33.348645 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/36c14515-2f07-46ad-a5cd-1e81ccb8506e-apiservice-cert\") pod \"metallb-operator-webhook-server-6c89d777d4-h7xf2\" (UID: \"36c14515-2f07-46ad-a5cd-1e81ccb8506e\") " pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.357243 master-0 kubenswrapper[19803]: I0313 01:34:33.357212 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8rzvp\" (UniqueName: \"kubernetes.io/projected/36c14515-2f07-46ad-a5cd-1e81ccb8506e-kube-api-access-8rzvp\") pod \"metallb-operator-webhook-server-6c89d777d4-h7xf2\" (UID: \"36c14515-2f07-46ad-a5cd-1e81ccb8506e\") " pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.590433 master-0 kubenswrapper[19803]: I0313 01:34:33.590378 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" Mar 13 01:34:33.645692 master-0 kubenswrapper[19803]: I0313 01:34:33.645574 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" event={"ID":"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6","Type":"ContainerStarted","Data":"5c19a86e2a679697f1457c3221884562fd37cfd0927e650bfeb7145a7853286a"} Mar 13 01:34:34.074179 master-0 kubenswrapper[19803]: I0313 01:34:34.074116 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2"] Mar 13 01:34:34.087662 master-0 kubenswrapper[19803]: W0313 01:34:34.087608 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36c14515_2f07_46ad_a5cd_1e81ccb8506e.slice/crio-d92c56858e3ba87833e17d088224ac68fce6fb92d91c344a96fa872e75097c93 WatchSource:0}: Error finding container d92c56858e3ba87833e17d088224ac68fce6fb92d91c344a96fa872e75097c93: Status 404 returned error can't find the container with id d92c56858e3ba87833e17d088224ac68fce6fb92d91c344a96fa872e75097c93 Mar 13 01:34:34.674539 master-0 kubenswrapper[19803]: I0313 01:34:34.674129 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" event={"ID":"36c14515-2f07-46ad-a5cd-1e81ccb8506e","Type":"ContainerStarted","Data":"d92c56858e3ba87833e17d088224ac68fce6fb92d91c344a96fa872e75097c93"} Mar 13 01:34:37.756421 master-0 kubenswrapper[19803]: I0313 01:34:37.756339 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j" event={"ID":"c7c96cc6-98a5-467b-aed4-c50790caa51e","Type":"ContainerStarted","Data":"b960d490587a21c32d67c2da4531427694974cf4e9fe25e46575a40b98afc04d"} Mar 13 01:34:37.810267 master-0 kubenswrapper[19803]: I0313 01:34:37.809033 19803 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-796d4cfff4-rjb7j" podStartSLOduration=4.875469482 podStartE2EDuration="10.808992675s" podCreationTimestamp="2026-03-13 01:34:27 +0000 UTC" firstStartedPulling="2026-03-13 01:34:31.509673085 +0000 UTC m=+1019.474820754" lastFinishedPulling="2026-03-13 01:34:37.443196258 +0000 UTC m=+1025.408343947" observedRunningTime="2026-03-13 01:34:37.78862811 +0000 UTC m=+1025.753775799" watchObservedRunningTime="2026-03-13 01:34:37.808992675 +0000 UTC m=+1025.774140344" Mar 13 01:34:38.838714 master-0 kubenswrapper[19803]: I0313 01:34:38.838652 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-6888856db4-5rwl7" Mar 13 01:34:41.436257 master-0 kubenswrapper[19803]: I0313 01:34:41.435788 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw"] Mar 13 01:34:41.439784 master-0 kubenswrapper[19803]: I0313 01:34:41.437048 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw" Mar 13 01:34:41.443441 master-0 kubenswrapper[19803]: I0313 01:34:41.443378 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Mar 13 01:34:41.447542 master-0 kubenswrapper[19803]: I0313 01:34:41.443787 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Mar 13 01:34:41.458535 master-0 kubenswrapper[19803]: I0313 01:34:41.454153 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw"] Mar 13 01:34:41.557538 master-0 kubenswrapper[19803]: I0313 01:34:41.556567 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bm25\" (UniqueName: \"kubernetes.io/projected/e79a1bba-9fe7-4f9f-ad48-bb3910e54bff-kube-api-access-6bm25\") pod \"obo-prometheus-operator-68bc856cb9-pqxsw\" (UID: \"e79a1bba-9fe7-4f9f-ad48-bb3910e54bff\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw" Mar 13 01:34:41.636226 master-0 kubenswrapper[19803]: I0313 01:34:41.636160 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m"] Mar 13 01:34:41.640533 master-0 kubenswrapper[19803]: I0313 01:34:41.637467 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" Mar 13 01:34:41.642999 master-0 kubenswrapper[19803]: I0313 01:34:41.641137 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Mar 13 01:34:41.653541 master-0 kubenswrapper[19803]: I0313 01:34:41.653288 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w"] Mar 13 01:34:41.658620 master-0 kubenswrapper[19803]: I0313 01:34:41.654408 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w" Mar 13 01:34:41.662732 master-0 kubenswrapper[19803]: I0313 01:34:41.661936 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m"] Mar 13 01:34:41.662732 master-0 kubenswrapper[19803]: I0313 01:34:41.662247 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6bm25\" (UniqueName: \"kubernetes.io/projected/e79a1bba-9fe7-4f9f-ad48-bb3910e54bff-kube-api-access-6bm25\") pod \"obo-prometheus-operator-68bc856cb9-pqxsw\" (UID: \"e79a1bba-9fe7-4f9f-ad48-bb3910e54bff\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw" Mar 13 01:34:41.766533 master-0 kubenswrapper[19803]: I0313 01:34:41.763396 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5db336c1-1122-4d84-82d3-84594c981aa8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-4d62m\" (UID: \"5db336c1-1122-4d84-82d3-84594c981aa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" Mar 13 01:34:41.766533 master-0 kubenswrapper[19803]: 
I0313 01:34:41.763474 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5db336c1-1122-4d84-82d3-84594c981aa8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-4d62m\" (UID: \"5db336c1-1122-4d84-82d3-84594c981aa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" Mar 13 01:34:41.766533 master-0 kubenswrapper[19803]: I0313 01:34:41.763579 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04810dc8-d0d3-4b51-961d-a994763bae58-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-km66w\" (UID: \"04810dc8-d0d3-4b51-961d-a994763bae58\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w" Mar 13 01:34:41.766533 master-0 kubenswrapper[19803]: I0313 01:34:41.763620 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04810dc8-d0d3-4b51-961d-a994763bae58-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-km66w\" (UID: \"04810dc8-d0d3-4b51-961d-a994763bae58\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w" Mar 13 01:34:41.865843 master-0 kubenswrapper[19803]: I0313 01:34:41.865157 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04810dc8-d0d3-4b51-961d-a994763bae58-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-km66w\" (UID: \"04810dc8-d0d3-4b51-961d-a994763bae58\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w" Mar 13 01:34:41.866116 master-0 kubenswrapper[19803]: I0313 01:34:41.865858 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04810dc8-d0d3-4b51-961d-a994763bae58-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-km66w\" (UID: \"04810dc8-d0d3-4b51-961d-a994763bae58\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w" Mar 13 01:34:41.866116 master-0 kubenswrapper[19803]: I0313 01:34:41.865954 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5db336c1-1122-4d84-82d3-84594c981aa8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-4d62m\" (UID: \"5db336c1-1122-4d84-82d3-84594c981aa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" Mar 13 01:34:41.866116 master-0 kubenswrapper[19803]: I0313 01:34:41.865989 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/5db336c1-1122-4d84-82d3-84594c981aa8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-4d62m\" (UID: \"5db336c1-1122-4d84-82d3-84594c981aa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" Mar 13 01:34:41.869030 master-0 kubenswrapper[19803]: I0313 01:34:41.868956 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/04810dc8-d0d3-4b51-961d-a994763bae58-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-km66w\" (UID: \"04810dc8-d0d3-4b51-961d-a994763bae58\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w" Mar 13 01:34:41.869289 master-0 kubenswrapper[19803]: I0313 01:34:41.869258 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/5db336c1-1122-4d84-82d3-84594c981aa8-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-4d62m\" (UID: \"5db336c1-1122-4d84-82d3-84594c981aa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" Mar 13 01:34:41.869673 master-0 kubenswrapper[19803]: I0313 01:34:41.869650 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/04810dc8-d0d3-4b51-961d-a994763bae58-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-km66w\" (UID: \"04810dc8-d0d3-4b51-961d-a994763bae58\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w" Mar 13 01:34:41.870392 master-0 kubenswrapper[19803]: I0313 01:34:41.870346 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/5db336c1-1122-4d84-82d3-84594c981aa8-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-85989999bf-4d62m\" (UID: \"5db336c1-1122-4d84-82d3-84594c981aa8\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" Mar 13 01:34:41.999088 master-0 kubenswrapper[19803]: I0313 01:34:41.998980 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" Mar 13 01:34:42.010428 master-0 kubenswrapper[19803]: I0313 01:34:42.010385 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w"
Mar 13 01:34:42.054599 master-0 kubenswrapper[19803]: I0313 01:34:42.052665 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w"]
Mar 13 01:34:42.415389 master-0 kubenswrapper[19803]: I0313 01:34:42.415142 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-545d4d4674-56pfb"]
Mar 13 01:34:42.417995 master-0 kubenswrapper[19803]: I0313 01:34:42.417954 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-56pfb"
Mar 13 01:34:42.503330 master-0 kubenswrapper[19803]: I0313 01:34:42.503242 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2699d1bb-8aa6-4f12-b578-93e566b6340d-bound-sa-token\") pod \"cert-manager-545d4d4674-56pfb\" (UID: \"2699d1bb-8aa6-4f12-b578-93e566b6340d\") " pod="cert-manager/cert-manager-545d4d4674-56pfb"
Mar 13 01:34:42.504128 master-0 kubenswrapper[19803]: I0313 01:34:42.504105 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hr2h\" (UniqueName: \"kubernetes.io/projected/2699d1bb-8aa6-4f12-b578-93e566b6340d-kube-api-access-6hr2h\") pod \"cert-manager-545d4d4674-56pfb\" (UID: \"2699d1bb-8aa6-4f12-b578-93e566b6340d\") " pod="cert-manager/cert-manager-545d4d4674-56pfb"
Mar 13 01:34:42.607348 master-0 kubenswrapper[19803]: I0313 01:34:42.607253 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2699d1bb-8aa6-4f12-b578-93e566b6340d-bound-sa-token\") pod \"cert-manager-545d4d4674-56pfb\" (UID: \"2699d1bb-8aa6-4f12-b578-93e566b6340d\") " pod="cert-manager/cert-manager-545d4d4674-56pfb"
Mar 13 01:34:42.607713 master-0 kubenswrapper[19803]: I0313 01:34:42.607648 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6hr2h\" (UniqueName: \"kubernetes.io/projected/2699d1bb-8aa6-4f12-b578-93e566b6340d-kube-api-access-6hr2h\") pod \"cert-manager-545d4d4674-56pfb\" (UID: \"2699d1bb-8aa6-4f12-b578-93e566b6340d\") " pod="cert-manager/cert-manager-545d4d4674-56pfb"
Mar 13 01:34:42.714620 master-0 kubenswrapper[19803]: I0313 01:34:42.705424 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6bm25\" (UniqueName: \"kubernetes.io/projected/e79a1bba-9fe7-4f9f-ad48-bb3910e54bff-kube-api-access-6bm25\") pod \"obo-prometheus-operator-68bc856cb9-pqxsw\" (UID: \"e79a1bba-9fe7-4f9f-ad48-bb3910e54bff\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw"
Mar 13 01:34:42.719403 master-0 kubenswrapper[19803]: I0313 01:34:42.719336 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-56pfb"]
Mar 13 01:34:42.732542 master-0 kubenswrapper[19803]: I0313 01:34:42.725396 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2699d1bb-8aa6-4f12-b578-93e566b6340d-bound-sa-token\") pod \"cert-manager-545d4d4674-56pfb\" (UID: \"2699d1bb-8aa6-4f12-b578-93e566b6340d\") " pod="cert-manager/cert-manager-545d4d4674-56pfb"
Mar 13 01:34:42.789343 master-0 kubenswrapper[19803]: I0313 01:34:42.788451 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hr2h\" (UniqueName: \"kubernetes.io/projected/2699d1bb-8aa6-4f12-b578-93e566b6340d-kube-api-access-6hr2h\") pod \"cert-manager-545d4d4674-56pfb\" (UID: \"2699d1bb-8aa6-4f12-b578-93e566b6340d\") " pod="cert-manager/cert-manager-545d4d4674-56pfb"
Mar 13 01:34:42.807078 master-0 kubenswrapper[19803]: I0313 01:34:42.807035 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw"
Mar 13 01:34:42.844936 master-0 kubenswrapper[19803]: I0313 01:34:42.844884 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fcc84"]
Mar 13 01:34:42.853770 master-0 kubenswrapper[19803]: I0313 01:34:42.853721 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-fcc84"
Mar 13 01:34:42.859761 master-0 kubenswrapper[19803]: I0313 01:34:42.858186 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls"
Mar 13 01:34:42.876087 master-0 kubenswrapper[19803]: I0313 01:34:42.876023 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fcc84"]
Mar 13 01:34:42.924652 master-0 kubenswrapper[19803]: I0313 01:34:42.912601 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mctmb\" (UniqueName: \"kubernetes.io/projected/c3976e1d-6751-403b-b831-967f80ef904d-kube-api-access-mctmb\") pod \"observability-operator-59bdc8b94-fcc84\" (UID: \"c3976e1d-6751-403b-b831-967f80ef904d\") " pod="openshift-operators/observability-operator-59bdc8b94-fcc84"
Mar 13 01:34:42.925163 master-0 kubenswrapper[19803]: I0313 01:34:42.925116 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3976e1d-6751-403b-b831-967f80ef904d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fcc84\" (UID: \"c3976e1d-6751-403b-b831-967f80ef904d\") " pod="openshift-operators/observability-operator-59bdc8b94-fcc84"
Mar 13 01:34:43.029011 master-0 kubenswrapper[19803]: I0313 01:34:43.028080 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mctmb\" (UniqueName: \"kubernetes.io/projected/c3976e1d-6751-403b-b831-967f80ef904d-kube-api-access-mctmb\") pod \"observability-operator-59bdc8b94-fcc84\" (UID: \"c3976e1d-6751-403b-b831-967f80ef904d\") " pod="openshift-operators/observability-operator-59bdc8b94-fcc84"
Mar 13 01:34:43.029011 master-0 kubenswrapper[19803]: I0313 01:34:43.028194 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3976e1d-6751-403b-b831-967f80ef904d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fcc84\" (UID: \"c3976e1d-6751-403b-b831-967f80ef904d\") " pod="openshift-operators/observability-operator-59bdc8b94-fcc84"
Mar 13 01:34:43.041441 master-0 kubenswrapper[19803]: I0313 01:34:43.037086 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/c3976e1d-6751-403b-b831-967f80ef904d-observability-operator-tls\") pod \"observability-operator-59bdc8b94-fcc84\" (UID: \"c3976e1d-6751-403b-b831-967f80ef904d\") " pod="openshift-operators/observability-operator-59bdc8b94-fcc84"
Mar 13 01:34:43.041441 master-0 kubenswrapper[19803]: I0313 01:34:43.037700 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-545d4d4674-56pfb"
Mar 13 01:34:43.043553 master-0 kubenswrapper[19803]: I0313 01:34:43.043016 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pxl6w"]
Mar 13 01:34:43.053716 master-0 kubenswrapper[19803]: I0313 01:34:43.053649 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pxl6w"
Mar 13 01:34:43.064987 master-0 kubenswrapper[19803]: I0313 01:34:43.063926 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pxl6w"]
Mar 13 01:34:43.074456 master-0 kubenswrapper[19803]: I0313 01:34:43.074408 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mctmb\" (UniqueName: \"kubernetes.io/projected/c3976e1d-6751-403b-b831-967f80ef904d-kube-api-access-mctmb\") pod \"observability-operator-59bdc8b94-fcc84\" (UID: \"c3976e1d-6751-403b-b831-967f80ef904d\") " pod="openshift-operators/observability-operator-59bdc8b94-fcc84"
Mar 13 01:34:43.132156 master-0 kubenswrapper[19803]: I0313 01:34:43.130669 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r9ql\" (UniqueName: \"kubernetes.io/projected/fd1e670f-7667-46fc-8213-340c0479c901-kube-api-access-4r9ql\") pod \"perses-operator-5bf474d74f-pxl6w\" (UID: \"fd1e670f-7667-46fc-8213-340c0479c901\") " pod="openshift-operators/perses-operator-5bf474d74f-pxl6w"
Mar 13 01:34:43.132156 master-0 kubenswrapper[19803]: I0313 01:34:43.130764 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd1e670f-7667-46fc-8213-340c0479c901-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pxl6w\" (UID: \"fd1e670f-7667-46fc-8213-340c0479c901\") " pod="openshift-operators/perses-operator-5bf474d74f-pxl6w"
Mar 13 01:34:43.193884 master-0 kubenswrapper[19803]: I0313 01:34:43.193705 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-fcc84"
Mar 13 01:34:43.232434 master-0 kubenswrapper[19803]: I0313 01:34:43.232215 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r9ql\" (UniqueName: \"kubernetes.io/projected/fd1e670f-7667-46fc-8213-340c0479c901-kube-api-access-4r9ql\") pod \"perses-operator-5bf474d74f-pxl6w\" (UID: \"fd1e670f-7667-46fc-8213-340c0479c901\") " pod="openshift-operators/perses-operator-5bf474d74f-pxl6w"
Mar 13 01:34:43.232434 master-0 kubenswrapper[19803]: I0313 01:34:43.232305 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd1e670f-7667-46fc-8213-340c0479c901-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pxl6w\" (UID: \"fd1e670f-7667-46fc-8213-340c0479c901\") " pod="openshift-operators/perses-operator-5bf474d74f-pxl6w"
Mar 13 01:34:43.238307 master-0 kubenswrapper[19803]: I0313 01:34:43.238142 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/fd1e670f-7667-46fc-8213-340c0479c901-openshift-service-ca\") pod \"perses-operator-5bf474d74f-pxl6w\" (UID: \"fd1e670f-7667-46fc-8213-340c0479c901\") " pod="openshift-operators/perses-operator-5bf474d74f-pxl6w"
Mar 13 01:34:43.269848 master-0 kubenswrapper[19803]: I0313 01:34:43.269748 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r9ql\" (UniqueName: \"kubernetes.io/projected/fd1e670f-7667-46fc-8213-340c0479c901-kube-api-access-4r9ql\") pod \"perses-operator-5bf474d74f-pxl6w\" (UID: \"fd1e670f-7667-46fc-8213-340c0479c901\") " pod="openshift-operators/perses-operator-5bf474d74f-pxl6w"
Mar 13 01:34:43.429609 master-0 kubenswrapper[19803]: I0313 01:34:43.428154 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-pxl6w"
Mar 13 01:34:44.523328 master-0 kubenswrapper[19803]: I0313 01:34:44.523250 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w"]
Mar 13 01:34:44.524029 master-0 kubenswrapper[19803]: W0313 01:34:44.523641 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod04810dc8_d0d3_4b51_961d_a994763bae58.slice/crio-656d02e9a406193c5d1602f67e24edde8ecdba6df6704880599de25ce0c1ca1f WatchSource:0}: Error finding container 656d02e9a406193c5d1602f67e24edde8ecdba6df6704880599de25ce0c1ca1f: Status 404 returned error can't find the container with id 656d02e9a406193c5d1602f67e24edde8ecdba6df6704880599de25ce0c1ca1f
Mar 13 01:34:44.567968 master-0 kubenswrapper[19803]: I0313 01:34:44.567848 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-fcc84"]
Mar 13 01:34:44.707709 master-0 kubenswrapper[19803]: I0313 01:34:44.707639 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw"]
Mar 13 01:34:44.731240 master-0 kubenswrapper[19803]: I0313 01:34:44.725273 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m"]
Mar 13 01:34:44.743656 master-0 kubenswrapper[19803]: W0313 01:34:44.742844 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5db336c1_1122_4d84_82d3_84594c981aa8.slice/crio-106d5ea237f90726d349d26d5cfd31e3c0e52c982e90c65adad6989781ec1ea2 WatchSource:0}: Error finding container 106d5ea237f90726d349d26d5cfd31e3c0e52c982e90c65adad6989781ec1ea2: Status 404 returned error can't find the container with id 106d5ea237f90726d349d26d5cfd31e3c0e52c982e90c65adad6989781ec1ea2
Mar 13 01:34:44.828635 master-0 kubenswrapper[19803]: I0313 01:34:44.825405 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-545d4d4674-56pfb"]
Mar 13 01:34:44.829584 master-0 kubenswrapper[19803]: W0313 01:34:44.829541 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2699d1bb_8aa6_4f12_b578_93e566b6340d.slice/crio-827db35bfa79344bfbfedf21fa92c658fa93b11c7a3e12ce4f0a66573f809c5d WatchSource:0}: Error finding container 827db35bfa79344bfbfedf21fa92c658fa93b11c7a3e12ce4f0a66573f809c5d: Status 404 returned error can't find the container with id 827db35bfa79344bfbfedf21fa92c658fa93b11c7a3e12ce4f0a66573f809c5d
Mar 13 01:34:44.864927 master-0 kubenswrapper[19803]: I0313 01:34:44.864846 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-fcc84" event={"ID":"c3976e1d-6751-403b-b831-967f80ef904d","Type":"ContainerStarted","Data":"dab039c719282087caf46896738a08d18bd4006b7450c6a262d937cd8cc8e9f5"}
Mar 13 01:34:44.873440 master-0 kubenswrapper[19803]: I0313 01:34:44.873351 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" event={"ID":"5db336c1-1122-4d84-82d3-84594c981aa8","Type":"ContainerStarted","Data":"106d5ea237f90726d349d26d5cfd31e3c0e52c982e90c65adad6989781ec1ea2"}
Mar 13 01:34:44.882969 master-0 kubenswrapper[19803]: I0313 01:34:44.876207 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw" event={"ID":"e79a1bba-9fe7-4f9f-ad48-bb3910e54bff","Type":"ContainerStarted","Data":"c0947f4d741f49c4c90c1af8fd05a68d5230d8e90183828384cbcb3054b90ac0"}
Mar 13 01:34:44.882969 master-0 kubenswrapper[19803]: I0313 01:34:44.879932 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" event={"ID":"974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6","Type":"ContainerStarted","Data":"568132baef40496ed2939d8a00dea5abef5e47a8a560ff09640166289473eb5f"}
Mar 13 01:34:44.882969 master-0 kubenswrapper[19803]: I0313 01:34:44.879974 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j"
Mar 13 01:34:44.885794 master-0 kubenswrapper[19803]: I0313 01:34:44.884056 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" event={"ID":"36c14515-2f07-46ad-a5cd-1e81ccb8506e","Type":"ContainerStarted","Data":"913f8adbd9c75897dc26fb1435a589244f830584c72624678be5a269f7935450"}
Mar 13 01:34:44.885794 master-0 kubenswrapper[19803]: I0313 01:34:44.884859 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2"
Mar 13 01:34:44.888188 master-0 kubenswrapper[19803]: I0313 01:34:44.888127 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w" event={"ID":"04810dc8-d0d3-4b51-961d-a994763bae58","Type":"ContainerStarted","Data":"656d02e9a406193c5d1602f67e24edde8ecdba6df6704880599de25ce0c1ca1f"}
Mar 13 01:34:44.889732 master-0 kubenswrapper[19803]: I0313 01:34:44.889677 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-56pfb" event={"ID":"2699d1bb-8aa6-4f12-b578-93e566b6340d","Type":"ContainerStarted","Data":"827db35bfa79344bfbfedf21fa92c658fa93b11c7a3e12ce4f0a66573f809c5d"}
Mar 13 01:34:44.892521 master-0 kubenswrapper[19803]: W0313 01:34:44.892459 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfd1e670f_7667_46fc_8213_340c0479c901.slice/crio-e3f239f3e57d04b45fb174d4420b3c7d168dfb1fea51b561468ad8b29fadbf59 WatchSource:0}: Error finding container e3f239f3e57d04b45fb174d4420b3c7d168dfb1fea51b561468ad8b29fadbf59: Status 404 returned error can't find the container with id e3f239f3e57d04b45fb174d4420b3c7d168dfb1fea51b561468ad8b29fadbf59
Mar 13 01:34:44.992928 master-0 kubenswrapper[19803]: I0313 01:34:44.992406 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-pxl6w"]
Mar 13 01:34:44.994754 master-0 kubenswrapper[19803]: I0313 01:34:44.994647 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j" podStartSLOduration=2.360412402 podStartE2EDuration="12.994622054s" podCreationTimestamp="2026-03-13 01:34:32 +0000 UTC" firstStartedPulling="2026-03-13 01:34:33.340724226 +0000 UTC m=+1021.305871905" lastFinishedPulling="2026-03-13 01:34:43.974933878 +0000 UTC m=+1031.940081557" observedRunningTime="2026-03-13 01:34:44.908421764 +0000 UTC m=+1032.873569443" watchObservedRunningTime="2026-03-13 01:34:44.994622054 +0000 UTC m=+1032.959769733"
Mar 13 01:34:45.016538 master-0 kubenswrapper[19803]: I0313 01:34:45.010406 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2" podStartSLOduration=2.096008156 podStartE2EDuration="12.010378733s" podCreationTimestamp="2026-03-13 01:34:33 +0000 UTC" firstStartedPulling="2026-03-13 01:34:34.098488517 +0000 UTC m=+1022.063636196" lastFinishedPulling="2026-03-13 01:34:44.012859094 +0000 UTC m=+1031.978006773" observedRunningTime="2026-03-13 01:34:44.929208109 +0000 UTC m=+1032.894355788" watchObservedRunningTime="2026-03-13 01:34:45.010378733 +0000 UTC m=+1032.975526412"
Mar 13 01:34:45.924752 master-0 kubenswrapper[19803]: I0313 01:34:45.924669 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pxl6w" event={"ID":"fd1e670f-7667-46fc-8213-340c0479c901","Type":"ContainerStarted","Data":"e3f239f3e57d04b45fb174d4420b3c7d168dfb1fea51b561468ad8b29fadbf59"}
Mar 13 01:34:45.931731 master-0 kubenswrapper[19803]: I0313 01:34:45.931636 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-545d4d4674-56pfb" event={"ID":"2699d1bb-8aa6-4f12-b578-93e566b6340d","Type":"ContainerStarted","Data":"09ebfee4311e9f65f0672279d799c7aa2ec05b922ba98b5b3be43470134e0bd7"}
Mar 13 01:34:45.982542 master-0 kubenswrapper[19803]: I0313 01:34:45.976730 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-545d4d4674-56pfb" podStartSLOduration=4.976710289 podStartE2EDuration="4.976710289s" podCreationTimestamp="2026-03-13 01:34:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:34:45.957758276 +0000 UTC m=+1033.922905965" watchObservedRunningTime="2026-03-13 01:34:45.976710289 +0000 UTC m=+1033.941857968"
Mar 13 01:34:56.026337 master-0 kubenswrapper[19803]: I0313 01:34:56.026268 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw" event={"ID":"e79a1bba-9fe7-4f9f-ad48-bb3910e54bff","Type":"ContainerStarted","Data":"0ef3d530c74fd62998605cebe4363f69761fd015a64c299e789ad0c5b1c36544"}
Mar 13 01:34:56.027817 master-0 kubenswrapper[19803]: I0313 01:34:56.027787 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w" event={"ID":"04810dc8-d0d3-4b51-961d-a994763bae58","Type":"ContainerStarted","Data":"109281e6c0d968bacebb618c6973a6f014066dcd2a56241639d23689f35e03bf"}
Mar 13 01:34:56.029409 master-0 kubenswrapper[19803]: I0313 01:34:56.029357 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-fcc84" event={"ID":"c3976e1d-6751-403b-b831-967f80ef904d","Type":"ContainerStarted","Data":"cea4b69a9811ad03d28a72d44a0cb89aa84c94d16a2574e221d114679afb34e6"}
Mar 13 01:34:56.029791 master-0 kubenswrapper[19803]: I0313 01:34:56.029751 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-fcc84"
Mar 13 01:34:56.030648 master-0 kubenswrapper[19803]: I0313 01:34:56.030603 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-pxl6w" event={"ID":"fd1e670f-7667-46fc-8213-340c0479c901","Type":"ContainerStarted","Data":"377f8ba327e0e05fa26add301810762d2a28e27667e83716c24fe695e583f822"}
Mar 13 01:34:56.031434 master-0 kubenswrapper[19803]: I0313 01:34:56.031408 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-pxl6w"
Mar 13 01:34:56.033117 master-0 kubenswrapper[19803]: I0313 01:34:56.033075 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" event={"ID":"5db336c1-1122-4d84-82d3-84594c981aa8","Type":"ContainerStarted","Data":"68370950f44f1c1b3b4acc593b86d1dadb92512ecccd3226cc321612b09cbc34"}
Mar 13 01:34:56.047719 master-0 kubenswrapper[19803]: I0313 01:34:56.047659 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-fcc84"
Mar 13 01:34:56.083917 master-0 kubenswrapper[19803]: I0313 01:34:56.083748 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-pqxsw" podStartSLOduration=4.416117332 podStartE2EDuration="15.083725588s" podCreationTimestamp="2026-03-13 01:34:41 +0000 UTC" firstStartedPulling="2026-03-13 01:34:44.709022129 +0000 UTC m=+1032.674169808" lastFinishedPulling="2026-03-13 01:34:55.376630345 +0000 UTC m=+1043.341778064" observedRunningTime="2026-03-13 01:34:56.078149161 +0000 UTC m=+1044.043296840" watchObservedRunningTime="2026-03-13 01:34:56.083725588 +0000 UTC m=+1044.048873257"
Mar 13 01:34:56.218160 master-0 kubenswrapper[19803]: I0313 01:34:56.218051 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-fcc84" podStartSLOduration=3.300905721 podStartE2EDuration="14.218029677s" podCreationTimestamp="2026-03-13 01:34:42 +0000 UTC" firstStartedPulling="2026-03-13 01:34:44.540439677 +0000 UTC m=+1032.505587356" lastFinishedPulling="2026-03-13 01:34:55.457563623 +0000 UTC m=+1043.422711312" observedRunningTime="2026-03-13 01:34:56.211031967 +0000 UTC m=+1044.176179656" watchObservedRunningTime="2026-03-13 01:34:56.218029677 +0000 UTC m=+1044.183177376"
Mar 13 01:34:56.278629 master-0 kubenswrapper[19803]: I0313 01:34:56.277429 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-5bf474d74f-pxl6w" podStartSLOduration=3.792220065 podStartE2EDuration="14.277408153s" podCreationTimestamp="2026-03-13 01:34:42 +0000 UTC" firstStartedPulling="2026-03-13 01:34:44.895000247 +0000 UTC m=+1032.860147926" lastFinishedPulling="2026-03-13 01:34:55.380188295 +0000 UTC m=+1043.345336014" observedRunningTime="2026-03-13 01:34:56.276973323 +0000 UTC m=+1044.242121012" watchObservedRunningTime="2026-03-13 01:34:56.277408153 +0000 UTC m=+1044.242555832"
Mar 13 01:34:56.333355 master-0 kubenswrapper[19803]: I0313 01:34:56.332408 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-4d62m" podStartSLOduration=4.704283634 podStartE2EDuration="15.332386018s" podCreationTimestamp="2026-03-13 01:34:41 +0000 UTC" firstStartedPulling="2026-03-13 01:34:44.746543355 +0000 UTC m=+1032.711691034" lastFinishedPulling="2026-03-13 01:34:55.374645729 +0000 UTC m=+1043.339793418" observedRunningTime="2026-03-13 01:34:56.31802045 +0000 UTC m=+1044.283168159" watchObservedRunningTime="2026-03-13 01:34:56.332386018 +0000 UTC m=+1044.297533697"
Mar 13 01:34:56.378688 master-0 kubenswrapper[19803]: I0313 01:34:56.374710 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-85989999bf-km66w" podStartSLOduration=4.522956393 podStartE2EDuration="15.374688064s" podCreationTimestamp="2026-03-13 01:34:41 +0000 UTC" firstStartedPulling="2026-03-13 01:34:44.528620658 +0000 UTC m=+1032.493768337" lastFinishedPulling="2026-03-13 01:34:55.380352309 +0000 UTC m=+1043.345500008" observedRunningTime="2026-03-13 01:34:56.373648241 +0000 UTC m=+1044.338795940" watchObservedRunningTime="2026-03-13 01:34:56.374688064 +0000 UTC m=+1044.339835753"
Mar 13 01:35:03.433921 master-0 kubenswrapper[19803]: I0313 01:35:03.433832 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-pxl6w"
Mar 13 01:35:03.600660 master-0 kubenswrapper[19803]: I0313 01:35:03.600579 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-6c89d777d4-h7xf2"
Mar 13 01:35:22.743540 master-0 kubenswrapper[19803]: I0313 01:35:22.743320 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6984bbdf9-qw42j"
Mar 13 01:35:33.218702 master-0 kubenswrapper[19803]: I0313 01:35:33.218627 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"]
Mar 13 01:35:33.222217 master-0 kubenswrapper[19803]: I0313 01:35:33.219825 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"
Mar 13 01:35:33.235556 master-0 kubenswrapper[19803]: I0313 01:35:33.228175 19803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert"
Mar 13 01:35:33.235556 master-0 kubenswrapper[19803]: I0313 01:35:33.233777 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-dc7jx"]
Mar 13 01:35:33.257640 master-0 kubenswrapper[19803]: I0313 01:35:33.256797 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"]
Mar 13 01:35:33.257640 master-0 kubenswrapper[19803]: I0313 01:35:33.256964 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.262662 master-0 kubenswrapper[19803]: I0313 01:35:33.262631 19803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret"
Mar 13 01:35:33.264843 master-0 kubenswrapper[19803]: I0313 01:35:33.264797 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup"
Mar 13 01:35:33.331856 master-0 kubenswrapper[19803]: I0313 01:35:33.331789 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6kvrb\" (UniqueName: \"kubernetes.io/projected/26566375-fda5-4fbb-8e37-4901c404589e-kube-api-access-6kvrb\") pod \"frr-k8s-webhook-server-bcc4b6f68-gz7lk\" (UID: \"26566375-fda5-4fbb-8e37-4901c404589e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"
Mar 13 01:35:33.332067 master-0 kubenswrapper[19803]: I0313 01:35:33.331960 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26566375-fda5-4fbb-8e37-4901c404589e-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-gz7lk\" (UID: \"26566375-fda5-4fbb-8e37-4901c404589e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"
Mar 13 01:35:33.337440 master-0 kubenswrapper[19803]: I0313 01:35:33.337071 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-7p9lv"]
Mar 13 01:35:33.338816 master-0 kubenswrapper[19803]: I0313 01:35:33.338777 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7p9lv"
Mar 13 01:35:33.341880 master-0 kubenswrapper[19803]: I0313 01:35:33.341835 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2"
Mar 13 01:35:33.342072 master-0 kubenswrapper[19803]: I0313 01:35:33.342047 19803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret"
Mar 13 01:35:33.342599 master-0 kubenswrapper[19803]: I0313 01:35:33.342571 19803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist"
Mar 13 01:35:33.357407 master-0 kubenswrapper[19803]: I0313 01:35:33.357344 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-7bb4cc7c98-lzxmk"]
Mar 13 01:35:33.359961 master-0 kubenswrapper[19803]: I0313 01:35:33.359863 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-lzxmk"
Mar 13 01:35:33.372504 master-0 kubenswrapper[19803]: I0313 01:35:33.372432 19803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret"
Mar 13 01:35:33.382523 master-0 kubenswrapper[19803]: I0313 01:35:33.382463 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-lzxmk"]
Mar 13 01:35:33.438428 master-0 kubenswrapper[19803]: I0313 01:35:33.438350 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-memberlist\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv"
Mar 13 01:35:33.438428 master-0 kubenswrapper[19803]: I0313 01:35:33.438420 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-metrics\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.438428 master-0 kubenswrapper[19803]: I0313 01:35:33.438463 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57qcg\" (UniqueName: \"kubernetes.io/projected/a06d84dd-5485-4043-bd8d-332d3bb99fa3-kube-api-access-57qcg\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv"
Mar 13 01:35:33.438938 master-0 kubenswrapper[19803]: I0313 01:35:33.438481 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-metrics-certs\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv"
Mar 13 01:35:33.438938 master-0 kubenswrapper[19803]: I0313 01:35:33.438536 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26566375-fda5-4fbb-8e37-4901c404589e-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-gz7lk\" (UID: \"26566375-fda5-4fbb-8e37-4901c404589e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"
Mar 13 01:35:33.438938 master-0 kubenswrapper[19803]: I0313 01:35:33.438665 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a06d84dd-5485-4043-bd8d-332d3bb99fa3-metallb-excludel2\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv"
Mar 13 01:35:33.438938 master-0 kubenswrapper[19803]: I0313 01:35:33.438740 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-frr-sockets\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.438938 master-0 kubenswrapper[19803]: I0313 01:35:33.438761 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-frr-conf\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.438938 master-0 kubenswrapper[19803]: I0313 01:35:33.438795 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6kvrb\" (UniqueName: \"kubernetes.io/projected/26566375-fda5-4fbb-8e37-4901c404589e-kube-api-access-6kvrb\") pod \"frr-k8s-webhook-server-bcc4b6f68-gz7lk\" (UID: \"26566375-fda5-4fbb-8e37-4901c404589e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"
Mar 13 01:35:33.439205 master-0 kubenswrapper[19803]: I0313 01:35:33.439147 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5555aed3-8836-40c7-a55a-ff3708f816e5-metrics-certs\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.439390 master-0 kubenswrapper[19803]: I0313 01:35:33.439322 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cv7r\" (UniqueName: \"kubernetes.io/projected/5555aed3-8836-40c7-a55a-ff3708f816e5-kube-api-access-9cv7r\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.439390 master-0 kubenswrapper[19803]: I0313 01:35:33.439376 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-reloader\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.439599 master-0 kubenswrapper[19803]: I0313 01:35:33.439453 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5555aed3-8836-40c7-a55a-ff3708f816e5-frr-startup\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.450629 master-0 kubenswrapper[19803]: I0313 01:35:33.448127 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/26566375-fda5-4fbb-8e37-4901c404589e-cert\") pod \"frr-k8s-webhook-server-bcc4b6f68-gz7lk\" (UID: \"26566375-fda5-4fbb-8e37-4901c404589e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"
Mar 13 01:35:33.470602 master-0 kubenswrapper[19803]: I0313 01:35:33.470484 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6kvrb\" (UniqueName: \"kubernetes.io/projected/26566375-fda5-4fbb-8e37-4901c404589e-kube-api-access-6kvrb\") pod \"frr-k8s-webhook-server-bcc4b6f68-gz7lk\" (UID: \"26566375-fda5-4fbb-8e37-4901c404589e\") " pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"
Mar 13 01:35:33.541234 master-0 kubenswrapper[19803]: I0313 01:35:33.541174 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57qcg\" (UniqueName: \"kubernetes.io/projected/a06d84dd-5485-4043-bd8d-332d3bb99fa3-kube-api-access-57qcg\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv"
Mar 13 01:35:33.541234 master-0 kubenswrapper[19803]: I0313 01:35:33.541236 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-metrics-certs\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv"
Mar 13 01:35:33.541532 master-0 kubenswrapper[19803]: I0313 01:35:33.541273 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a79b54b-b4c6-4a23-8818-6ee030e13899-metrics-certs\") pod \"controller-7bb4cc7c98-lzxmk\" (UID: \"5a79b54b-b4c6-4a23-8818-6ee030e13899\") " pod="metallb-system/controller-7bb4cc7c98-lzxmk"
Mar 13 01:35:33.541851 master-0 kubenswrapper[19803]: I0313 01:35:33.541623 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp9gs\" (UniqueName: \"kubernetes.io/projected/5a79b54b-b4c6-4a23-8818-6ee030e13899-kube-api-access-bp9gs\") pod \"controller-7bb4cc7c98-lzxmk\" (UID: \"5a79b54b-b4c6-4a23-8818-6ee030e13899\") " pod="metallb-system/controller-7bb4cc7c98-lzxmk"
Mar 13 01:35:33.541851 master-0 kubenswrapper[19803]: I0313 01:35:33.541706 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a06d84dd-5485-4043-bd8d-332d3bb99fa3-metallb-excludel2\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv"
Mar 13 01:35:33.541851 master-0 kubenswrapper[19803]: I0313 01:35:33.541744 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-frr-sockets\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.541851 master-0 kubenswrapper[19803]: I0313 01:35:33.541791 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-frr-conf\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.542619 master-0 kubenswrapper[19803]: I0313 01:35:33.542080 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5555aed3-8836-40c7-a55a-ff3708f816e5-metrics-certs\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:33.542619 master-0 kubenswrapper[19803]: I0313 01:35:33.542156 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-frr-sockets\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx"
Mar 13
01:35:33.542619 master-0 kubenswrapper[19803]: I0313 01:35:33.542187 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-frr-conf\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.542619 master-0 kubenswrapper[19803]: I0313 01:35:33.542188 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9cv7r\" (UniqueName: \"kubernetes.io/projected/5555aed3-8836-40c7-a55a-ff3708f816e5-kube-api-access-9cv7r\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.542619 master-0 kubenswrapper[19803]: I0313 01:35:33.542243 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-reloader\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.542619 master-0 kubenswrapper[19803]: I0313 01:35:33.542288 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5555aed3-8836-40c7-a55a-ff3708f816e5-frr-startup\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.542619 master-0 kubenswrapper[19803]: I0313 01:35:33.542341 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a79b54b-b4c6-4a23-8818-6ee030e13899-cert\") pod \"controller-7bb4cc7c98-lzxmk\" (UID: \"5a79b54b-b4c6-4a23-8818-6ee030e13899\") " pod="metallb-system/controller-7bb4cc7c98-lzxmk" Mar 13 01:35:33.542619 master-0 kubenswrapper[19803]: I0313 01:35:33.542366 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-memberlist\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv" Mar 13 01:35:33.542619 master-0 kubenswrapper[19803]: I0313 01:35:33.542394 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-metrics\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.542619 master-0 kubenswrapper[19803]: I0313 01:35:33.542543 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/a06d84dd-5485-4043-bd8d-332d3bb99fa3-metallb-excludel2\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv" Mar 13 01:35:33.542951 master-0 kubenswrapper[19803]: I0313 01:35:33.542690 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-metrics\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.542951 master-0 kubenswrapper[19803]: E0313 01:35:33.542752 19803 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 01:35:33.542951 master-0 kubenswrapper[19803]: E0313 01:35:33.542791 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-memberlist podName:a06d84dd-5485-4043-bd8d-332d3bb99fa3 nodeName:}" failed. No retries permitted until 2026-03-13 01:35:34.042776659 +0000 UTC m=+1082.007924338 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-memberlist") pod "speaker-7p9lv" (UID: "a06d84dd-5485-4043-bd8d-332d3bb99fa3") : secret "metallb-memberlist" not found Mar 13 01:35:33.543356 master-0 kubenswrapper[19803]: I0313 01:35:33.543324 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/5555aed3-8836-40c7-a55a-ff3708f816e5-reloader\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.544942 master-0 kubenswrapper[19803]: I0313 01:35:33.544914 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-metrics-certs\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv" Mar 13 01:35:33.547141 master-0 kubenswrapper[19803]: I0313 01:35:33.547104 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/5555aed3-8836-40c7-a55a-ff3708f816e5-frr-startup\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.548524 master-0 kubenswrapper[19803]: I0313 01:35:33.547860 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5555aed3-8836-40c7-a55a-ff3708f816e5-metrics-certs\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.551065 master-0 kubenswrapper[19803]: I0313 01:35:33.551023 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk" Mar 13 01:35:33.558103 master-0 kubenswrapper[19803]: I0313 01:35:33.558041 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57qcg\" (UniqueName: \"kubernetes.io/projected/a06d84dd-5485-4043-bd8d-332d3bb99fa3-kube-api-access-57qcg\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv" Mar 13 01:35:33.578335 master-0 kubenswrapper[19803]: I0313 01:35:33.578286 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9cv7r\" (UniqueName: \"kubernetes.io/projected/5555aed3-8836-40c7-a55a-ff3708f816e5-kube-api-access-9cv7r\") pod \"frr-k8s-dc7jx\" (UID: \"5555aed3-8836-40c7-a55a-ff3708f816e5\") " pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.593691 master-0 kubenswrapper[19803]: I0313 01:35:33.593427 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:35:33.652923 master-0 kubenswrapper[19803]: I0313 01:35:33.651543 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a79b54b-b4c6-4a23-8818-6ee030e13899-cert\") pod \"controller-7bb4cc7c98-lzxmk\" (UID: \"5a79b54b-b4c6-4a23-8818-6ee030e13899\") " pod="metallb-system/controller-7bb4cc7c98-lzxmk" Mar 13 01:35:33.652923 master-0 kubenswrapper[19803]: I0313 01:35:33.651850 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a79b54b-b4c6-4a23-8818-6ee030e13899-metrics-certs\") pod \"controller-7bb4cc7c98-lzxmk\" (UID: \"5a79b54b-b4c6-4a23-8818-6ee030e13899\") " pod="metallb-system/controller-7bb4cc7c98-lzxmk" Mar 13 01:35:33.652923 master-0 kubenswrapper[19803]: I0313 01:35:33.651879 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-bp9gs\" (UniqueName: \"kubernetes.io/projected/5a79b54b-b4c6-4a23-8818-6ee030e13899-kube-api-access-bp9gs\") pod \"controller-7bb4cc7c98-lzxmk\" (UID: \"5a79b54b-b4c6-4a23-8818-6ee030e13899\") " pod="metallb-system/controller-7bb4cc7c98-lzxmk" Mar 13 01:35:33.653176 master-0 kubenswrapper[19803]: I0313 01:35:33.653033 19803 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Mar 13 01:35:33.656392 master-0 kubenswrapper[19803]: I0313 01:35:33.656351 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5a79b54b-b4c6-4a23-8818-6ee030e13899-metrics-certs\") pod \"controller-7bb4cc7c98-lzxmk\" (UID: \"5a79b54b-b4c6-4a23-8818-6ee030e13899\") " pod="metallb-system/controller-7bb4cc7c98-lzxmk" Mar 13 01:35:33.666939 master-0 kubenswrapper[19803]: I0313 01:35:33.666880 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/5a79b54b-b4c6-4a23-8818-6ee030e13899-cert\") pod \"controller-7bb4cc7c98-lzxmk\" (UID: \"5a79b54b-b4c6-4a23-8818-6ee030e13899\") " pod="metallb-system/controller-7bb4cc7c98-lzxmk" Mar 13 01:35:33.671588 master-0 kubenswrapper[19803]: I0313 01:35:33.669782 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp9gs\" (UniqueName: \"kubernetes.io/projected/5a79b54b-b4c6-4a23-8818-6ee030e13899-kube-api-access-bp9gs\") pod \"controller-7bb4cc7c98-lzxmk\" (UID: \"5a79b54b-b4c6-4a23-8818-6ee030e13899\") " pod="metallb-system/controller-7bb4cc7c98-lzxmk" Mar 13 01:35:33.691061 master-0 kubenswrapper[19803]: I0313 01:35:33.690963 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-7bb4cc7c98-lzxmk" Mar 13 01:35:34.013707 master-0 kubenswrapper[19803]: W0313 01:35:34.013653 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26566375_fda5_4fbb_8e37_4901c404589e.slice/crio-b02603ab8cfa1384894e0dc0706f47164016698b7a26bc4aadc0a13f4cd1595f WatchSource:0}: Error finding container b02603ab8cfa1384894e0dc0706f47164016698b7a26bc4aadc0a13f4cd1595f: Status 404 returned error can't find the container with id b02603ab8cfa1384894e0dc0706f47164016698b7a26bc4aadc0a13f4cd1595f Mar 13 01:35:34.013900 master-0 kubenswrapper[19803]: I0313 01:35:34.013726 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"] Mar 13 01:35:34.064337 master-0 kubenswrapper[19803]: I0313 01:35:34.064274 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-memberlist\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv" Mar 13 01:35:34.064574 master-0 kubenswrapper[19803]: E0313 01:35:34.064468 19803 secret.go:189] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Mar 13 01:35:34.064574 master-0 kubenswrapper[19803]: E0313 01:35:34.064557 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-memberlist podName:a06d84dd-5485-4043-bd8d-332d3bb99fa3 nodeName:}" failed. No retries permitted until 2026-03-13 01:35:35.06453519 +0000 UTC m=+1083.029682869 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-memberlist") pod "speaker-7p9lv" (UID: "a06d84dd-5485-4043-bd8d-332d3bb99fa3") : secret "metallb-memberlist" not found Mar 13 01:35:34.111187 master-0 kubenswrapper[19803]: I0313 01:35:34.111137 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-7bb4cc7c98-lzxmk"] Mar 13 01:35:34.112317 master-0 kubenswrapper[19803]: W0313 01:35:34.112227 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a79b54b_b4c6_4a23_8818_6ee030e13899.slice/crio-9031ec1a1515bbbf9055188e3969d7e6445649b4848b70fa9d24d2e995dd1d23 WatchSource:0}: Error finding container 9031ec1a1515bbbf9055188e3969d7e6445649b4848b70fa9d24d2e995dd1d23: Status 404 returned error can't find the container with id 9031ec1a1515bbbf9055188e3969d7e6445649b4848b70fa9d24d2e995dd1d23 Mar 13 01:35:34.443638 master-0 kubenswrapper[19803]: I0313 01:35:34.443550 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk" event={"ID":"26566375-fda5-4fbb-8e37-4901c404589e","Type":"ContainerStarted","Data":"b02603ab8cfa1384894e0dc0706f47164016698b7a26bc4aadc0a13f4cd1595f"} Mar 13 01:35:34.446162 master-0 kubenswrapper[19803]: I0313 01:35:34.446112 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-lzxmk" event={"ID":"5a79b54b-b4c6-4a23-8818-6ee030e13899","Type":"ContainerStarted","Data":"c846056ba6ad773620ca0f7d90b6a25dcfb0d2a5b5a5cffc28ac88bdc2833afa"} Mar 13 01:35:34.446292 master-0 kubenswrapper[19803]: I0313 01:35:34.446176 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-lzxmk" event={"ID":"5a79b54b-b4c6-4a23-8818-6ee030e13899","Type":"ContainerStarted","Data":"9031ec1a1515bbbf9055188e3969d7e6445649b4848b70fa9d24d2e995dd1d23"} Mar 13 
01:35:34.447937 master-0 kubenswrapper[19803]: I0313 01:35:34.447885 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dc7jx" event={"ID":"5555aed3-8836-40c7-a55a-ff3708f816e5","Type":"ContainerStarted","Data":"983191921a47ebe887efab606772d1f90a8163c19ac2eddd6640b39914abb9cf"} Mar 13 01:35:35.096997 master-0 kubenswrapper[19803]: I0313 01:35:35.096924 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-memberlist\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv" Mar 13 01:35:35.100019 master-0 kubenswrapper[19803]: I0313 01:35:35.099979 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/a06d84dd-5485-4043-bd8d-332d3bb99fa3-memberlist\") pod \"speaker-7p9lv\" (UID: \"a06d84dd-5485-4043-bd8d-332d3bb99fa3\") " pod="metallb-system/speaker-7p9lv" Mar 13 01:35:35.173338 master-0 kubenswrapper[19803]: I0313 01:35:35.173271 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-7p9lv" Mar 13 01:35:35.359705 master-0 kubenswrapper[19803]: I0313 01:35:35.358586 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9"] Mar 13 01:35:35.361118 master-0 kubenswrapper[19803]: I0313 01:35:35.361077 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9" Mar 13 01:35:35.377712 master-0 kubenswrapper[19803]: I0313 01:35:35.377394 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs"] Mar 13 01:35:35.378931 master-0 kubenswrapper[19803]: I0313 01:35:35.378905 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs" Mar 13 01:35:35.387821 master-0 kubenswrapper[19803]: I0313 01:35:35.387767 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Mar 13 01:35:35.408079 master-0 kubenswrapper[19803]: I0313 01:35:35.407196 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9"] Mar 13 01:35:35.430716 master-0 kubenswrapper[19803]: I0313 01:35:35.430668 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-xc24l"] Mar 13 01:35:35.432076 master-0 kubenswrapper[19803]: I0313 01:35:35.432041 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-xc24l" Mar 13 01:35:35.484164 master-0 kubenswrapper[19803]: I0313 01:35:35.481452 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs"] Mar 13 01:35:35.499523 master-0 kubenswrapper[19803]: I0313 01:35:35.499442 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7p9lv" event={"ID":"a06d84dd-5485-4043-bd8d-332d3bb99fa3","Type":"ContainerStarted","Data":"e067b690f8e05839bf3a61494b84be30c5893c00164650edd8ba327ca46fbf63"} Mar 13 01:35:35.504264 master-0 kubenswrapper[19803]: I0313 01:35:35.504193 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b64722e0-860a-4f39-bca0-51cae9911bc0-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-8f7rs\" (UID: \"b64722e0-860a-4f39-bca0-51cae9911bc0\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs" Mar 13 01:35:35.504369 master-0 kubenswrapper[19803]: I0313 01:35:35.504330 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5mjmk\" (UniqueName: 
\"kubernetes.io/projected/1f9d5bff-035e-4b19-946a-c8c49fd43ebb-kube-api-access-5mjmk\") pod \"nmstate-metrics-9b8c8685d-tvqv9\" (UID: \"1f9d5bff-035e-4b19-946a-c8c49fd43ebb\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9" Mar 13 01:35:35.504606 master-0 kubenswrapper[19803]: I0313 01:35:35.504435 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7ttb\" (UniqueName: \"kubernetes.io/projected/b64722e0-860a-4f39-bca0-51cae9911bc0-kube-api-access-n7ttb\") pod \"nmstate-webhook-5f558f5558-8f7rs\" (UID: \"b64722e0-860a-4f39-bca0-51cae9911bc0\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs" Mar 13 01:35:35.571379 master-0 kubenswrapper[19803]: I0313 01:35:35.571318 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"] Mar 13 01:35:35.572946 master-0 kubenswrapper[19803]: I0313 01:35:35.572670 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n" Mar 13 01:35:35.585565 master-0 kubenswrapper[19803]: I0313 01:35:35.582283 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Mar 13 01:35:35.585565 master-0 kubenswrapper[19803]: I0313 01:35:35.582694 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Mar 13 01:35:35.597601 master-0 kubenswrapper[19803]: I0313 01:35:35.597556 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"] Mar 13 01:35:35.608866 master-0 kubenswrapper[19803]: I0313 01:35:35.608801 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtbnp\" (UniqueName: \"kubernetes.io/projected/1da4232b-d161-4e9d-9e52-0c4663080dfd-kube-api-access-dtbnp\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l" Mar 13 01:35:35.609172 master-0 kubenswrapper[19803]: I0313 01:35:35.609152 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7ttb\" (UniqueName: \"kubernetes.io/projected/b64722e0-860a-4f39-bca0-51cae9911bc0-kube-api-access-n7ttb\") pod \"nmstate-webhook-5f558f5558-8f7rs\" (UID: \"b64722e0-860a-4f39-bca0-51cae9911bc0\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs" Mar 13 01:35:35.609271 master-0 kubenswrapper[19803]: I0313 01:35:35.609256 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1da4232b-d161-4e9d-9e52-0c4663080dfd-nmstate-lock\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l" Mar 13 01:35:35.609396 master-0 kubenswrapper[19803]: I0313 
01:35:35.609383 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1da4232b-d161-4e9d-9e52-0c4663080dfd-dbus-socket\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l" Mar 13 01:35:35.609492 master-0 kubenswrapper[19803]: I0313 01:35:35.609479 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1da4232b-d161-4e9d-9e52-0c4663080dfd-ovs-socket\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l" Mar 13 01:35:35.609606 master-0 kubenswrapper[19803]: I0313 01:35:35.609587 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b64722e0-860a-4f39-bca0-51cae9911bc0-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-8f7rs\" (UID: \"b64722e0-860a-4f39-bca0-51cae9911bc0\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs" Mar 13 01:35:35.609768 master-0 kubenswrapper[19803]: I0313 01:35:35.609708 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5mjmk\" (UniqueName: \"kubernetes.io/projected/1f9d5bff-035e-4b19-946a-c8c49fd43ebb-kube-api-access-5mjmk\") pod \"nmstate-metrics-9b8c8685d-tvqv9\" (UID: \"1f9d5bff-035e-4b19-946a-c8c49fd43ebb\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9" Mar 13 01:35:35.616941 master-0 kubenswrapper[19803]: I0313 01:35:35.616865 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/b64722e0-860a-4f39-bca0-51cae9911bc0-tls-key-pair\") pod \"nmstate-webhook-5f558f5558-8f7rs\" (UID: \"b64722e0-860a-4f39-bca0-51cae9911bc0\") " 
pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs" Mar 13 01:35:35.631820 master-0 kubenswrapper[19803]: I0313 01:35:35.631790 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5mjmk\" (UniqueName: \"kubernetes.io/projected/1f9d5bff-035e-4b19-946a-c8c49fd43ebb-kube-api-access-5mjmk\") pod \"nmstate-metrics-9b8c8685d-tvqv9\" (UID: \"1f9d5bff-035e-4b19-946a-c8c49fd43ebb\") " pod="openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9" Mar 13 01:35:35.634478 master-0 kubenswrapper[19803]: I0313 01:35:35.634411 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7ttb\" (UniqueName: \"kubernetes.io/projected/b64722e0-860a-4f39-bca0-51cae9911bc0-kube-api-access-n7ttb\") pod \"nmstate-webhook-5f558f5558-8f7rs\" (UID: \"b64722e0-860a-4f39-bca0-51cae9911bc0\") " pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs" Mar 13 01:35:35.715034 master-0 kubenswrapper[19803]: I0313 01:35:35.712468 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1da4232b-d161-4e9d-9e52-0c4663080dfd-dbus-socket\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l" Mar 13 01:35:35.715034 master-0 kubenswrapper[19803]: I0313 01:35:35.712740 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/1da4232b-d161-4e9d-9e52-0c4663080dfd-dbus-socket\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l" Mar 13 01:35:35.715034 master-0 kubenswrapper[19803]: I0313 01:35:35.714093 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1da4232b-d161-4e9d-9e52-0c4663080dfd-ovs-socket\") pod \"nmstate-handler-xc24l\" (UID: 
\"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l"
Mar 13 01:35:35.715034 master-0 kubenswrapper[19803]: I0313 01:35:35.714183 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ws826\" (UniqueName: \"kubernetes.io/projected/72706d51-8596-4a52-88bd-d994a8baad33-kube-api-access-ws826\") pod \"nmstate-console-plugin-86f58fcf4-xjq7n\" (UID: \"72706d51-8596-4a52-88bd-d994a8baad33\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:35.715034 master-0 kubenswrapper[19803]: I0313 01:35:35.714450 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtbnp\" (UniqueName: \"kubernetes.io/projected/1da4232b-d161-4e9d-9e52-0c4663080dfd-kube-api-access-dtbnp\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l"
Mar 13 01:35:35.715034 master-0 kubenswrapper[19803]: I0313 01:35:35.714534 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/72706d51-8596-4a52-88bd-d994a8baad33-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-xjq7n\" (UID: \"72706d51-8596-4a52-88bd-d994a8baad33\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:35.715034 master-0 kubenswrapper[19803]: I0313 01:35:35.714570 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1da4232b-d161-4e9d-9e52-0c4663080dfd-nmstate-lock\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l"
Mar 13 01:35:35.715034 master-0 kubenswrapper[19803]: I0313 01:35:35.714662 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/72706d51-8596-4a52-88bd-d994a8baad33-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-xjq7n\" (UID: \"72706d51-8596-4a52-88bd-d994a8baad33\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:35.715034 master-0 kubenswrapper[19803]: I0313 01:35:35.714843 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/1da4232b-d161-4e9d-9e52-0c4663080dfd-nmstate-lock\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l"
Mar 13 01:35:35.715034 master-0 kubenswrapper[19803]: I0313 01:35:35.714882 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/1da4232b-d161-4e9d-9e52-0c4663080dfd-ovs-socket\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l"
Mar 13 01:35:35.737678 master-0 kubenswrapper[19803]: I0313 01:35:35.737596 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9"
Mar 13 01:35:35.764221 master-0 kubenswrapper[19803]: I0313 01:35:35.764168 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtbnp\" (UniqueName: \"kubernetes.io/projected/1da4232b-d161-4e9d-9e52-0c4663080dfd-kube-api-access-dtbnp\") pod \"nmstate-handler-xc24l\" (UID: \"1da4232b-d161-4e9d-9e52-0c4663080dfd\") " pod="openshift-nmstate/nmstate-handler-xc24l"
Mar 13 01:35:35.780577 master-0 kubenswrapper[19803]: I0313 01:35:35.780527 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-7887658d99-sfwrp"]
Mar 13 01:35:35.781708 master-0 kubenswrapper[19803]: I0313 01:35:35.781690 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:35.804106 master-0 kubenswrapper[19803]: I0313 01:35:35.804043 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs"
Mar 13 01:35:35.811802 master-0 kubenswrapper[19803]: I0313 01:35:35.811750 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7887658d99-sfwrp"]
Mar 13 01:35:35.819549 master-0 kubenswrapper[19803]: I0313 01:35:35.816944 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ws826\" (UniqueName: \"kubernetes.io/projected/72706d51-8596-4a52-88bd-d994a8baad33-kube-api-access-ws826\") pod \"nmstate-console-plugin-86f58fcf4-xjq7n\" (UID: \"72706d51-8596-4a52-88bd-d994a8baad33\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:35.819549 master-0 kubenswrapper[19803]: I0313 01:35:35.817042 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/72706d51-8596-4a52-88bd-d994a8baad33-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-xjq7n\" (UID: \"72706d51-8596-4a52-88bd-d994a8baad33\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:35.819549 master-0 kubenswrapper[19803]: I0313 01:35:35.817092 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/72706d51-8596-4a52-88bd-d994a8baad33-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-xjq7n\" (UID: \"72706d51-8596-4a52-88bd-d994a8baad33\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:35.819549 master-0 kubenswrapper[19803]: E0313 01:35:35.818067 19803 secret.go:189] Couldn't get secret openshift-nmstate/plugin-serving-cert: secret "plugin-serving-cert" not found
Mar 13 01:35:35.819549 master-0 kubenswrapper[19803]: E0313 01:35:35.818112 19803 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/72706d51-8596-4a52-88bd-d994a8baad33-plugin-serving-cert podName:72706d51-8596-4a52-88bd-d994a8baad33 nodeName:}" failed. No retries permitted until 2026-03-13 01:35:36.31809756 +0000 UTC m=+1084.283245239 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "plugin-serving-cert" (UniqueName: "kubernetes.io/secret/72706d51-8596-4a52-88bd-d994a8baad33-plugin-serving-cert") pod "nmstate-console-plugin-86f58fcf4-xjq7n" (UID: "72706d51-8596-4a52-88bd-d994a8baad33") : secret "plugin-serving-cert" not found
Mar 13 01:35:35.829890 master-0 kubenswrapper[19803]: I0313 01:35:35.824472 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/72706d51-8596-4a52-88bd-d994a8baad33-nginx-conf\") pod \"nmstate-console-plugin-86f58fcf4-xjq7n\" (UID: \"72706d51-8596-4a52-88bd-d994a8baad33\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:35.829890 master-0 kubenswrapper[19803]: I0313 01:35:35.824903 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-xc24l"
Mar 13 01:35:35.863626 master-0 kubenswrapper[19803]: I0313 01:35:35.860567 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ws826\" (UniqueName: \"kubernetes.io/projected/72706d51-8596-4a52-88bd-d994a8baad33-kube-api-access-ws826\") pod \"nmstate-console-plugin-86f58fcf4-xjq7n\" (UID: \"72706d51-8596-4a52-88bd-d994a8baad33\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:35.921085 master-0 kubenswrapper[19803]: I0313 01:35:35.920076 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-service-ca\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:35.921085 master-0 kubenswrapper[19803]: I0313 01:35:35.920200 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-console-oauth-config\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:35.921085 master-0 kubenswrapper[19803]: I0313 01:35:35.920223 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-console-serving-cert\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:35.921085 master-0 kubenswrapper[19803]: I0313 01:35:35.920243 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-oauth-serving-cert\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:35.921085 master-0 kubenswrapper[19803]: I0313 01:35:35.920291 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-console-config\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:35.921085 master-0 kubenswrapper[19803]: I0313 01:35:35.920328 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-trusted-ca-bundle\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:35.921085 master-0 kubenswrapper[19803]: I0313 01:35:35.920366 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nn7z8\" (UniqueName: \"kubernetes.io/projected/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-kube-api-access-nn7z8\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.029317 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nn7z8\" (UniqueName: \"kubernetes.io/projected/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-kube-api-access-nn7z8\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.029412 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-service-ca\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.029443 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-console-oauth-config\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.029464 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-console-serving-cert\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.029485 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-oauth-serving-cert\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.029553 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-console-config\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.029578 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-trusted-ca-bundle\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.030888 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-trusted-ca-bundle\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.032417 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-oauth-serving-cert\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.032730 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-service-ca\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.033539 master-0 kubenswrapper[19803]: I0313 01:35:36.033031 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-console-config\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.040828 master-0 kubenswrapper[19803]: I0313 01:35:36.037155 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-console-oauth-config\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.043466 master-0 kubenswrapper[19803]: I0313 01:35:36.043405 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-console-serving-cert\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.051159 master-0 kubenswrapper[19803]: I0313 01:35:36.051126 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nn7z8\" (UniqueName: \"kubernetes.io/projected/6a8a3b62-6dfb-432e-80a6-7bb0c7f47976-kube-api-access-nn7z8\") pod \"console-7887658d99-sfwrp\" (UID: \"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976\") " pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.176922 master-0 kubenswrapper[19803]: I0313 01:35:36.175691 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:36.336403 master-0 kubenswrapper[19803]: I0313 01:35:36.336348 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/72706d51-8596-4a52-88bd-d994a8baad33-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-xjq7n\" (UID: \"72706d51-8596-4a52-88bd-d994a8baad33\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:36.336546 master-0 kubenswrapper[19803]: I0313 01:35:36.336354 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9"]
Mar 13 01:35:36.341601 master-0 kubenswrapper[19803]: I0313 01:35:36.341564 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/72706d51-8596-4a52-88bd-d994a8baad33-plugin-serving-cert\") pod \"nmstate-console-plugin-86f58fcf4-xjq7n\" (UID: \"72706d51-8596-4a52-88bd-d994a8baad33\") " pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:36.427366 master-0 kubenswrapper[19803]: I0313 01:35:36.427258 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs"]
Mar 13 01:35:36.514708 master-0 kubenswrapper[19803]: I0313 01:35:36.514642 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xc24l" event={"ID":"1da4232b-d161-4e9d-9e52-0c4663080dfd","Type":"ContainerStarted","Data":"5dea20ebbbd209c1562d2b2091c4972a6181f52a366cc5afff105db18d296a53"}
Mar 13 01:35:36.519019 master-0 kubenswrapper[19803]: I0313 01:35:36.518974 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs" event={"ID":"b64722e0-860a-4f39-bca0-51cae9911bc0","Type":"ContainerStarted","Data":"0e67e593026fb17c98ba7f9fa5849811708568ac9bd9f38c9052be0dc90ff512"}
Mar 13 01:35:36.520935 master-0 kubenswrapper[19803]: I0313 01:35:36.520889 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9" event={"ID":"1f9d5bff-035e-4b19-946a-c8c49fd43ebb","Type":"ContainerStarted","Data":"54b09d1ae2ef63ea83c989c52f32d98625afe99cdd5ff513056b4bec42e800b9"}
Mar 13 01:35:36.527354 master-0 kubenswrapper[19803]: I0313 01:35:36.527323 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7p9lv" event={"ID":"a06d84dd-5485-4043-bd8d-332d3bb99fa3","Type":"ContainerStarted","Data":"fb99cf974cf1ea109eb13f9c412253c0352ab8ec5987d86b229e4ce184dab96b"}
Mar 13 01:35:36.584950 master-0 kubenswrapper[19803]: I0313 01:35:36.584859 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"
Mar 13 01:35:36.692622 master-0 kubenswrapper[19803]: I0313 01:35:36.688390 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-7887658d99-sfwrp"]
Mar 13 01:35:36.709351 master-0 kubenswrapper[19803]: W0313 01:35:36.709048 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a8a3b62_6dfb_432e_80a6_7bb0c7f47976.slice/crio-8393df04b3622ede8af1ffce74012ed8be4541eedf3b9a925875af6a29ca6571 WatchSource:0}: Error finding container 8393df04b3622ede8af1ffce74012ed8be4541eedf3b9a925875af6a29ca6571: Status 404 returned error can't find the container with id 8393df04b3622ede8af1ffce74012ed8be4541eedf3b9a925875af6a29ca6571
Mar 13 01:35:37.066857 master-0 kubenswrapper[19803]: I0313 01:35:37.066808 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n"]
Mar 13 01:35:37.556543 master-0 kubenswrapper[19803]: I0313 01:35:37.556422 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7887658d99-sfwrp" event={"ID":"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976","Type":"ContainerStarted","Data":"feaa9f88c9d848aa090dc25bc05068fe28dcf2a8eb1cde07c82ad646c29b917c"}
Mar 13 01:35:37.557308 master-0 kubenswrapper[19803]: I0313 01:35:37.556554 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-7887658d99-sfwrp" event={"ID":"6a8a3b62-6dfb-432e-80a6-7bb0c7f47976","Type":"ContainerStarted","Data":"8393df04b3622ede8af1ffce74012ed8be4541eedf3b9a925875af6a29ca6571"}
Mar 13 01:35:37.780699 master-0 kubenswrapper[19803]: I0313 01:35:37.780612 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-7887658d99-sfwrp" podStartSLOduration=2.780595304 podStartE2EDuration="2.780595304s" podCreationTimestamp="2026-03-13 01:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:35:37.769088871 +0000 UTC m=+1085.734236580" watchObservedRunningTime="2026-03-13 01:35:37.780595304 +0000 UTC m=+1085.745742983"
Mar 13 01:35:37.896775 master-0 kubenswrapper[19803]: W0313 01:35:37.896718 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod72706d51_8596_4a52_88bd_d994a8baad33.slice/crio-be15c729ed015320cd76e5ffc22bfbc95f3b54f406d912aa13567c029a1981c6 WatchSource:0}: Error finding container be15c729ed015320cd76e5ffc22bfbc95f3b54f406d912aa13567c029a1981c6: Status 404 returned error can't find the container with id be15c729ed015320cd76e5ffc22bfbc95f3b54f406d912aa13567c029a1981c6
Mar 13 01:35:38.571131 master-0 kubenswrapper[19803]: I0313 01:35:38.571033 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n" event={"ID":"72706d51-8596-4a52-88bd-d994a8baad33","Type":"ContainerStarted","Data":"be15c729ed015320cd76e5ffc22bfbc95f3b54f406d912aa13567c029a1981c6"}
Mar 13 01:35:38.577305 master-0 kubenswrapper[19803]: I0313 01:35:38.577259 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-7p9lv" event={"ID":"a06d84dd-5485-4043-bd8d-332d3bb99fa3","Type":"ContainerStarted","Data":"cd608bacd3d27bf0788867955509ed36f6658c2c469e337843b4fc14c0c28fe3"}
Mar 13 01:35:38.578354 master-0 kubenswrapper[19803]: I0313 01:35:38.578336 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-7p9lv"
Mar 13 01:35:38.581687 master-0 kubenswrapper[19803]: I0313 01:35:38.581592 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-7bb4cc7c98-lzxmk" event={"ID":"5a79b54b-b4c6-4a23-8818-6ee030e13899","Type":"ContainerStarted","Data":"ecd86cff7d3264bd940d6f62374970b24c23541cc070de8f875d505bb5142b07"}
Mar 13 01:35:38.581687 master-0 kubenswrapper[19803]: I0313 01:35:38.581664 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-7bb4cc7c98-lzxmk"
Mar 13 01:35:38.607333 master-0 kubenswrapper[19803]: I0313 01:35:38.607260 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-7p9lv" podStartSLOduration=3.217509056 podStartE2EDuration="5.60724014s" podCreationTimestamp="2026-03-13 01:35:33 +0000 UTC" firstStartedPulling="2026-03-13 01:35:35.593621112 +0000 UTC m=+1083.558768791" lastFinishedPulling="2026-03-13 01:35:37.983352196 +0000 UTC m=+1085.948499875" observedRunningTime="2026-03-13 01:35:38.59855399 +0000 UTC m=+1086.563701679" watchObservedRunningTime="2026-03-13 01:35:38.60724014 +0000 UTC m=+1086.572387819"
Mar 13 01:35:38.631130 master-0 kubenswrapper[19803]: I0313 01:35:38.631015 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-7bb4cc7c98-lzxmk" podStartSLOduration=1.950283164 podStartE2EDuration="5.630994821s" podCreationTimestamp="2026-03-13 01:35:33 +0000 UTC" firstStartedPulling="2026-03-13 01:35:34.289900648 +0000 UTC m=+1082.255048327" lastFinishedPulling="2026-03-13 01:35:37.970612305 +0000 UTC m=+1085.935759984" observedRunningTime="2026-03-13 01:35:38.627887031 +0000 UTC m=+1086.593034710" watchObservedRunningTime="2026-03-13 01:35:38.630994821 +0000 UTC m=+1086.596142490"
Mar 13 01:35:44.649865 master-0 kubenswrapper[19803]: I0313 01:35:44.649770 19803 generic.go:334] "Generic (PLEG): container finished" podID="5555aed3-8836-40c7-a55a-ff3708f816e5" containerID="d77727ba2493fc97f4a597c858eb99abe362f128336617d63e2af75d195c9923" exitCode=0
Mar 13 01:35:44.650896 master-0 kubenswrapper[19803]: I0313 01:35:44.649900 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dc7jx" event={"ID":"5555aed3-8836-40c7-a55a-ff3708f816e5","Type":"ContainerDied","Data":"d77727ba2493fc97f4a597c858eb99abe362f128336617d63e2af75d195c9923"}
Mar 13 01:35:44.652945 master-0 kubenswrapper[19803]: I0313 01:35:44.652241 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk" event={"ID":"26566375-fda5-4fbb-8e37-4901c404589e","Type":"ContainerStarted","Data":"2b979dc0281c6052ff2b7f92dcef600e5f90816d0254acd6b3441441c820e47a"}
Mar 13 01:35:44.652945 master-0 kubenswrapper[19803]: I0313 01:35:44.652381 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk"
Mar 13 01:35:44.654692 master-0 kubenswrapper[19803]: I0313 01:35:44.654660 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-xc24l" event={"ID":"1da4232b-d161-4e9d-9e52-0c4663080dfd","Type":"ContainerStarted","Data":"6a99978eb70366c0948d7ea588cdc12f24a955474c5dda9d87264f97aafe80f0"}
Mar 13 01:35:44.655359 master-0 kubenswrapper[19803]: I0313 01:35:44.655335 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-xc24l"
Mar 13 01:35:44.657501 master-0 kubenswrapper[19803]: I0313 01:35:44.657433 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs" event={"ID":"b64722e0-860a-4f39-bca0-51cae9911bc0","Type":"ContainerStarted","Data":"c6068f2d4a187faacc0513e17beb745a4f738315fd0255f6f2d10a653b6e469b"}
Mar 13 01:35:44.657601 master-0 kubenswrapper[19803]: I0313 01:35:44.657528 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs"
Mar 13 01:35:44.660419 master-0 kubenswrapper[19803]: I0313 01:35:44.660391 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9" event={"ID":"1f9d5bff-035e-4b19-946a-c8c49fd43ebb","Type":"ContainerStarted","Data":"292ee75e9a4b016ef586fffe12213b941e0a48d53b862c3ab445ff997f4325bb"}
Mar 13 01:35:44.660500 master-0 kubenswrapper[19803]: I0313 01:35:44.660421 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9" event={"ID":"1f9d5bff-035e-4b19-946a-c8c49fd43ebb","Type":"ContainerStarted","Data":"204b3e7cd0d40252c114bbfad935740c569eae1e9c66b954f3c352cb6b033440"}
Mar 13 01:35:44.713493 master-0 kubenswrapper[19803]: I0313 01:35:44.713381 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-xc24l" podStartSLOduration=1.735853257 podStartE2EDuration="9.713360484s" podCreationTimestamp="2026-03-13 01:35:35 +0000 UTC" firstStartedPulling="2026-03-13 01:35:35.895252493 +0000 UTC m=+1083.860400172" lastFinishedPulling="2026-03-13 01:35:43.87275972 +0000 UTC m=+1091.837907399" observedRunningTime="2026-03-13 01:35:44.701500053 +0000 UTC m=+1092.666647742" watchObservedRunningTime="2026-03-13 01:35:44.713360484 +0000 UTC m=+1092.678508163"
Mar 13 01:35:44.730207 master-0 kubenswrapper[19803]: I0313 01:35:44.730016 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk" podStartSLOduration=1.765972664 podStartE2EDuration="11.729995243s" podCreationTimestamp="2026-03-13 01:35:33 +0000 UTC" firstStartedPulling="2026-03-13 01:35:34.016798819 +0000 UTC m=+1081.981946488" lastFinishedPulling="2026-03-13 01:35:43.980821378 +0000 UTC m=+1091.945969067" observedRunningTime="2026-03-13 01:35:44.72367831 +0000 UTC m=+1092.688825999" watchObservedRunningTime="2026-03-13 01:35:44.729995243 +0000 UTC m=+1092.695142922"
Mar 13 01:35:44.758533 master-0 kubenswrapper[19803]: I0313 01:35:44.754518 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-9b8c8685d-tvqv9" podStartSLOduration=2.182264614 podStartE2EDuration="9.754431142s" podCreationTimestamp="2026-03-13 01:35:35 +0000 UTC" firstStartedPulling="2026-03-13 01:35:36.337462065 +0000 UTC m=+1084.302609744" lastFinishedPulling="2026-03-13 01:35:43.909628583 +0000 UTC m=+1091.874776272" observedRunningTime="2026-03-13 01:35:44.743751308 +0000 UTC m=+1092.708898987" watchObservedRunningTime="2026-03-13 01:35:44.754431142 +0000 UTC m=+1092.719578841"
Mar 13 01:35:44.773169 master-0 kubenswrapper[19803]: I0313 01:35:44.773058 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs" podStartSLOduration=2.276960839 podStartE2EDuration="9.773033627s" podCreationTimestamp="2026-03-13 01:35:35 +0000 UTC" firstStartedPulling="2026-03-13 01:35:36.40811748 +0000 UTC m=+1084.373265179" lastFinishedPulling="2026-03-13 01:35:43.904190288 +0000 UTC m=+1091.869337967" observedRunningTime="2026-03-13 01:35:44.766059488 +0000 UTC m=+1092.731207177" watchObservedRunningTime="2026-03-13 01:35:44.773033627 +0000 UTC m=+1092.738181306"
Mar 13 01:35:45.177989 master-0 kubenswrapper[19803]: I0313 01:35:45.177924 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-7p9lv"
Mar 13 01:35:45.678861 master-0 kubenswrapper[19803]: I0313 01:35:45.678505 19803 generic.go:334] "Generic (PLEG): container finished" podID="5555aed3-8836-40c7-a55a-ff3708f816e5" containerID="b4bdb6f7669af2452175d47d0bbaaa75be86936f9b47c92f717946d44ddd619d" exitCode=0
Mar 13 01:35:45.678861 master-0 kubenswrapper[19803]: I0313 01:35:45.678641 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dc7jx" event={"ID":"5555aed3-8836-40c7-a55a-ff3708f816e5","Type":"ContainerDied","Data":"b4bdb6f7669af2452175d47d0bbaaa75be86936f9b47c92f717946d44ddd619d"}
Mar 13 01:35:45.686802 master-0 kubenswrapper[19803]: I0313 01:35:45.686243 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n" event={"ID":"72706d51-8596-4a52-88bd-d994a8baad33","Type":"ContainerStarted","Data":"1e58759585d801ba8ef156606be011260dc572885689db1d783297cc56e1d158"}
Mar 13 01:35:45.734958 master-0 kubenswrapper[19803]: I0313 01:35:45.734340 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-86f58fcf4-xjq7n" podStartSLOduration=3.2850921 podStartE2EDuration="10.734320128s" podCreationTimestamp="2026-03-13 01:35:35 +0000 UTC" firstStartedPulling="2026-03-13 01:35:37.899066841 +0000 UTC m=+1085.864214510" lastFinishedPulling="2026-03-13 01:35:45.348294839 +0000 UTC m=+1093.313442538" observedRunningTime="2026-03-13 01:35:45.727535013 +0000 UTC m=+1093.692682732" watchObservedRunningTime="2026-03-13 01:35:45.734320128 +0000 UTC m=+1093.699467797"
Mar 13 01:35:46.177351 master-0 kubenswrapper[19803]: I0313 01:35:46.177157 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:46.177351 master-0 kubenswrapper[19803]: I0313 01:35:46.177283 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:46.187004 master-0 kubenswrapper[19803]: I0313 01:35:46.186852 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:46.712146 master-0 kubenswrapper[19803]: I0313 01:35:46.712042 19803 generic.go:334] "Generic (PLEG): container finished" podID="5555aed3-8836-40c7-a55a-ff3708f816e5" containerID="0ddc03a2533d175066ec719ec58b9d6fe9cc6fe2f7c82ca784cca9229fba8e15" exitCode=0
Mar 13 01:35:46.713338 master-0 kubenswrapper[19803]: I0313 01:35:46.712255 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dc7jx" event={"ID":"5555aed3-8836-40c7-a55a-ff3708f816e5","Type":"ContainerDied","Data":"0ddc03a2533d175066ec719ec58b9d6fe9cc6fe2f7c82ca784cca9229fba8e15"}
Mar 13 01:35:46.718724 master-0 kubenswrapper[19803]: I0313 01:35:46.718646 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-7887658d99-sfwrp"
Mar 13 01:35:46.869898 master-0 kubenswrapper[19803]: I0313 01:35:46.869677 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-59c5b4f6c8-xvqg6"]
Mar 13 01:35:47.735433 master-0 kubenswrapper[19803]: I0313 01:35:47.734624 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dc7jx" event={"ID":"5555aed3-8836-40c7-a55a-ff3708f816e5","Type":"ContainerStarted","Data":"79f704fbd7fb3b24fa704ec814dde939ddfc938975384707055e8c4c464fb746"}
Mar 13 01:35:47.735433 master-0 kubenswrapper[19803]: I0313 01:35:47.734688 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dc7jx" event={"ID":"5555aed3-8836-40c7-a55a-ff3708f816e5","Type":"ContainerStarted","Data":"d85d19733532594de48bef4d16697a4f95807c0311c9709aef7112616f4b1dee"}
Mar 13 01:35:47.735433 master-0 kubenswrapper[19803]: I0313 01:35:47.734705 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dc7jx" event={"ID":"5555aed3-8836-40c7-a55a-ff3708f816e5","Type":"ContainerStarted","Data":"fd2de20351544a30f47ce14bc2e0e546ec3edb17927a2fd8ddf20ff518548263"}
Mar 13 01:35:47.735433 master-0 kubenswrapper[19803]: I0313 01:35:47.734715 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dc7jx" event={"ID":"5555aed3-8836-40c7-a55a-ff3708f816e5","Type":"ContainerStarted","Data":"33333663370eb86f94b1770431bf5a859e742ddc19a2fe69736fecaa5d56275e"}
Mar 13 01:35:47.735433 master-0 kubenswrapper[19803]: I0313 01:35:47.734725 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dc7jx" event={"ID":"5555aed3-8836-40c7-a55a-ff3708f816e5","Type":"ContainerStarted","Data":"928fcd66e7df9cdc7b3727028f4b8de646f8e08cb13495d30eb9fe30f0dbaed8"}
Mar 13 01:35:48.758892 master-0 kubenswrapper[19803]: I0313 01:35:48.758786 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-dc7jx" event={"ID":"5555aed3-8836-40c7-a55a-ff3708f816e5","Type":"ContainerStarted","Data":"be8ad6f8ea3b61628d3ebeacb21c47021cdc87b63e1ae63ed4f4142ab7c6094d"}
Mar 13 01:35:48.759432 master-0 kubenswrapper[19803]: I0313 01:35:48.759060 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:48.822317 master-0 kubenswrapper[19803]: I0313 01:35:48.822190 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-dc7jx" podStartSLOduration=5.661289139 podStartE2EDuration="15.822152516s" podCreationTimestamp="2026-03-13 01:35:33 +0000 UTC" firstStartedPulling="2026-03-13 01:35:33.718931384 +0000 UTC m=+1081.684079063" lastFinishedPulling="2026-03-13 01:35:43.879794761 +0000 UTC m=+1091.844942440" observedRunningTime="2026-03-13 01:35:48.797416199 +0000 UTC m=+1096.762563918" watchObservedRunningTime="2026-03-13 01:35:48.822152516 +0000 UTC m=+1096.787300235"
Mar 13 01:35:50.847418 master-0 kubenswrapper[19803]: I0313 01:35:50.847374 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-xc24l"
Mar 13 01:35:53.594038 master-0 kubenswrapper[19803]: I0313 01:35:53.593956 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:53.656608 master-0 kubenswrapper[19803]: I0313 01:35:53.653382 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-dc7jx"
Mar 13 01:35:53.699676 master-0 kubenswrapper[19803]: I0313 01:35:53.699348 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-7bb4cc7c98-lzxmk"
Mar 13 01:35:55.814755 master-0 kubenswrapper[19803]: I0313 01:35:55.814685 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-5f558f5558-8f7rs"
Mar 13 01:36:00.414785 master-0 kubenswrapper[19803]: I0313 01:36:00.414727 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-storage/vg-manager-n4wbn"]
Mar 13 01:36:00.416938 master-0 kubenswrapper[19803]: I0313 01:36:00.416911 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-storage/vg-manager-n4wbn"
Mar 13 01:36:00.419443 master-0 kubenswrapper[19803]: I0313 01:36:00.419386 19803 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-storage"/"vg-manager-metrics-cert"
Mar 13 01:36:00.431648 master-0 kubenswrapper[19803]: I0313 01:36:00.431581 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-n4wbn"]
Mar 13 01:36:00.518788 master-0 kubenswrapper[19803]: I0313 01:36:00.518673 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-csi-plugin-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn"
Mar 13 01:36:00.519081 master-0 kubenswrapper[19803]: I0313 01:36:00.518844 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-device-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn"
Mar 13 01:36:00.519081 master-0 kubenswrapper[19803]: I0313 01:36:00.518886 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-node-plugin-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn"
Mar 13 01:36:00.519081 master-0 kubenswrapper[19803]: I0313 01:36:00.518911 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-pod-volumes-dir\") pod \"vg-manager-n4wbn\" (UID:
\"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.519081 master-0 kubenswrapper[19803]: I0313 01:36:00.518966 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-registration-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.519260 master-0 kubenswrapper[19803]: I0313 01:36:00.519096 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-file-lock-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.519604 master-0 kubenswrapper[19803]: I0313 01:36:00.519524 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/df402717-11fa-4f28-96a2-beecc3c5ccc4-metrics-cert\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.519696 master-0 kubenswrapper[19803]: I0313 01:36:00.519630 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-sys\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.519816 master-0 kubenswrapper[19803]: I0313 01:36:00.519768 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-run-udev\") pod \"vg-manager-n4wbn\" (UID: 
\"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.519887 master-0 kubenswrapper[19803]: I0313 01:36:00.519851 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-lvmd-config\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.519930 master-0 kubenswrapper[19803]: I0313 01:36:00.519913 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27ghw\" (UniqueName: \"kubernetes.io/projected/df402717-11fa-4f28-96a2-beecc3c5ccc4-kube-api-access-27ghw\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.622620 master-0 kubenswrapper[19803]: I0313 01:36:00.622493 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-csi-plugin-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.622827 master-0 kubenswrapper[19803]: I0313 01:36:00.622689 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-device-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.622827 master-0 kubenswrapper[19803]: I0313 01:36:00.622756 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-node-plugin-dir\") pod \"vg-manager-n4wbn\" (UID: 
\"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.622827 master-0 kubenswrapper[19803]: I0313 01:36:00.622791 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-pod-volumes-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.622932 master-0 kubenswrapper[19803]: I0313 01:36:00.622866 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-registration-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.622969 master-0 kubenswrapper[19803]: I0313 01:36:00.622903 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"device-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-device-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623003 master-0 kubenswrapper[19803]: I0313 01:36:00.622977 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"file-lock-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-file-lock-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623082 master-0 kubenswrapper[19803]: I0313 01:36:00.623054 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/df402717-11fa-4f28-96a2-beecc3c5ccc4-metrics-cert\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " 
pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623124 master-0 kubenswrapper[19803]: I0313 01:36:00.623101 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-sys\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623164 master-0 kubenswrapper[19803]: I0313 01:36:00.623135 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-registration-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623164 master-0 kubenswrapper[19803]: I0313 01:36:00.623150 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-run-udev\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623229 master-0 kubenswrapper[19803]: I0313 01:36:00.623197 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-lvmd-config\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623265 master-0 kubenswrapper[19803]: I0313 01:36:00.623244 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27ghw\" (UniqueName: \"kubernetes.io/projected/df402717-11fa-4f28-96a2-beecc3c5ccc4-kube-api-access-27ghw\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623610 master-0 kubenswrapper[19803]: 
I0313 01:36:00.623497 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-node-plugin-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623781 master-0 kubenswrapper[19803]: I0313 01:36:00.623548 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-plugin-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-csi-plugin-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623781 master-0 kubenswrapper[19803]: I0313 01:36:00.623054 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-volumes-dir\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-pod-volumes-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623855 master-0 kubenswrapper[19803]: I0313 01:36:00.623760 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-sys\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.623889 master-0 kubenswrapper[19803]: I0313 01:36:00.623854 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-udev\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-run-udev\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.624072 master-0 kubenswrapper[19803]: I0313 01:36:00.624045 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"file-lock-dir\" (UniqueName: 
\"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-file-lock-dir\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.624359 master-0 kubenswrapper[19803]: I0313 01:36:00.624300 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lvmd-config\" (UniqueName: \"kubernetes.io/host-path/df402717-11fa-4f28-96a2-beecc3c5ccc4-lvmd-config\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.629430 master-0 kubenswrapper[19803]: I0313 01:36:00.629361 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-cert\" (UniqueName: \"kubernetes.io/secret/df402717-11fa-4f28-96a2-beecc3c5ccc4-metrics-cert\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.647451 master-0 kubenswrapper[19803]: I0313 01:36:00.647367 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27ghw\" (UniqueName: \"kubernetes.io/projected/df402717-11fa-4f28-96a2-beecc3c5ccc4-kube-api-access-27ghw\") pod \"vg-manager-n4wbn\" (UID: \"df402717-11fa-4f28-96a2-beecc3c5ccc4\") " pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:00.749668 master-0 kubenswrapper[19803]: I0313 01:36:00.749607 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:01.220241 master-0 kubenswrapper[19803]: W0313 01:36:01.220155 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddf402717_11fa_4f28_96a2_beecc3c5ccc4.slice/crio-e3a6787308bd629c036eb2c27208847f1c741187d862abde499b5f002a8327b0 WatchSource:0}: Error finding container e3a6787308bd629c036eb2c27208847f1c741187d862abde499b5f002a8327b0: Status 404 returned error can't find the container with id e3a6787308bd629c036eb2c27208847f1c741187d862abde499b5f002a8327b0 Mar 13 01:36:01.222685 master-0 kubenswrapper[19803]: I0313 01:36:01.222638 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-storage/vg-manager-n4wbn"] Mar 13 01:36:01.924462 master-0 kubenswrapper[19803]: I0313 01:36:01.924148 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-n4wbn" event={"ID":"df402717-11fa-4f28-96a2-beecc3c5ccc4","Type":"ContainerStarted","Data":"a29240e8cc2ca2af32d5e788c08c9755972859ffd8c9cbc09c8613e4bf534ff9"} Mar 13 01:36:01.924462 master-0 kubenswrapper[19803]: I0313 01:36:01.924213 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-n4wbn" event={"ID":"df402717-11fa-4f28-96a2-beecc3c5ccc4","Type":"ContainerStarted","Data":"e3a6787308bd629c036eb2c27208847f1c741187d862abde499b5f002a8327b0"} Mar 13 01:36:01.959484 master-0 kubenswrapper[19803]: I0313 01:36:01.959379 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-storage/vg-manager-n4wbn" podStartSLOduration=1.959356497 podStartE2EDuration="1.959356497s" podCreationTimestamp="2026-03-13 01:36:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:36:01.943176717 +0000 UTC m=+1109.908324436" watchObservedRunningTime="2026-03-13 01:36:01.959356497 +0000 
UTC m=+1109.924504206" Mar 13 01:36:03.558604 master-0 kubenswrapper[19803]: I0313 01:36:03.558546 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-bcc4b6f68-gz7lk" Mar 13 01:36:03.598496 master-0 kubenswrapper[19803]: I0313 01:36:03.598176 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-dc7jx" Mar 13 01:36:03.957264 master-0 kubenswrapper[19803]: I0313 01:36:03.957136 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-n4wbn_df402717-11fa-4f28-96a2-beecc3c5ccc4/vg-manager/0.log" Mar 13 01:36:03.957264 master-0 kubenswrapper[19803]: I0313 01:36:03.957208 19803 generic.go:334] "Generic (PLEG): container finished" podID="df402717-11fa-4f28-96a2-beecc3c5ccc4" containerID="a29240e8cc2ca2af32d5e788c08c9755972859ffd8c9cbc09c8613e4bf534ff9" exitCode=1 Mar 13 01:36:03.957489 master-0 kubenswrapper[19803]: I0313 01:36:03.957259 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-n4wbn" event={"ID":"df402717-11fa-4f28-96a2-beecc3c5ccc4","Type":"ContainerDied","Data":"a29240e8cc2ca2af32d5e788c08c9755972859ffd8c9cbc09c8613e4bf534ff9"} Mar 13 01:36:03.958336 master-0 kubenswrapper[19803]: I0313 01:36:03.958267 19803 scope.go:117] "RemoveContainer" containerID="a29240e8cc2ca2af32d5e788c08c9755972859ffd8c9cbc09c8613e4bf534ff9" Mar 13 01:36:04.333561 master-0 kubenswrapper[19803]: I0313 01:36:04.333460 19803 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock" Mar 13 01:36:05.018753 master-0 kubenswrapper[19803]: I0313 01:36:05.018686 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-n4wbn_df402717-11fa-4f28-96a2-beecc3c5ccc4/vg-manager/0.log" Mar 13 01:36:05.019692 master-0 kubenswrapper[19803]: I0313 01:36:05.018769 19803 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-storage/vg-manager-n4wbn" event={"ID":"df402717-11fa-4f28-96a2-beecc3c5ccc4","Type":"ContainerStarted","Data":"4808e4c572f1352e025e31e50118834dbba997f422592bcb31a8f040a175ad1d"} Mar 13 01:36:05.032918 master-0 kubenswrapper[19803]: I0313 01:36:05.032727 19803 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/topolvm.io-reg.sock","Timestamp":"2026-03-13T01:36:04.333530454Z","Handler":null,"Name":""} Mar 13 01:36:05.038449 master-0 kubenswrapper[19803]: I0313 01:36:05.038306 19803 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: topolvm.io endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock versions: 1.0.0 Mar 13 01:36:05.038449 master-0 kubenswrapper[19803]: I0313 01:36:05.038389 19803 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: topolvm.io at endpoint: /var/lib/kubelet/plugins/topolvm.io/node/csi-topolvm.sock Mar 13 01:36:10.750985 master-0 kubenswrapper[19803]: I0313 01:36:10.750907 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:10.754198 master-0 kubenswrapper[19803]: I0313 01:36:10.754141 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:11.103288 master-0 kubenswrapper[19803]: I0313 01:36:11.103085 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:11.105378 master-0 kubenswrapper[19803]: I0313 01:36:11.105269 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-storage/vg-manager-n4wbn" Mar 13 01:36:11.938689 master-0 kubenswrapper[19803]: I0313 01:36:11.938574 19803 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-59c5b4f6c8-xvqg6" 
podUID="43107d0a-efa1-46b4-b0ae-8029f21b46ad" containerName="console" containerID="cri-o://625c63aaa079ea37a3add2c597ae342fdf4ce128aac041cabf25a70180fc9340" gracePeriod=15 Mar 13 01:36:12.125111 master-0 kubenswrapper[19803]: I0313 01:36:12.125024 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59c5b4f6c8-xvqg6_43107d0a-efa1-46b4-b0ae-8029f21b46ad/console/0.log" Mar 13 01:36:12.125111 master-0 kubenswrapper[19803]: I0313 01:36:12.125109 19803 generic.go:334] "Generic (PLEG): container finished" podID="43107d0a-efa1-46b4-b0ae-8029f21b46ad" containerID="625c63aaa079ea37a3add2c597ae342fdf4ce128aac041cabf25a70180fc9340" exitCode=2 Mar 13 01:36:12.128531 master-0 kubenswrapper[19803]: I0313 01:36:12.126445 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59c5b4f6c8-xvqg6" event={"ID":"43107d0a-efa1-46b4-b0ae-8029f21b46ad","Type":"ContainerDied","Data":"625c63aaa079ea37a3add2c597ae342fdf4ce128aac041cabf25a70180fc9340"} Mar 13 01:36:12.503000 master-0 kubenswrapper[19803]: I0313 01:36:12.500862 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59c5b4f6c8-xvqg6_43107d0a-efa1-46b4-b0ae-8029f21b46ad/console/0.log" Mar 13 01:36:12.503000 master-0 kubenswrapper[19803]: I0313 01:36:12.501008 19803 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:36:12.657293 master-0 kubenswrapper[19803]: I0313 01:36:12.657201 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l86vt\" (UniqueName: \"kubernetes.io/projected/43107d0a-efa1-46b4-b0ae-8029f21b46ad-kube-api-access-l86vt\") pod \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " Mar 13 01:36:12.657617 master-0 kubenswrapper[19803]: I0313 01:36:12.657385 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-oauth-config\") pod \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " Mar 13 01:36:12.657617 master-0 kubenswrapper[19803]: I0313 01:36:12.657437 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-trusted-ca-bundle\") pod \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " Mar 13 01:36:12.657617 master-0 kubenswrapper[19803]: I0313 01:36:12.657574 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-oauth-serving-cert\") pod \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " Mar 13 01:36:12.657779 master-0 kubenswrapper[19803]: I0313 01:36:12.657679 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-config\") pod \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " Mar 13 01:36:12.657779 master-0 
kubenswrapper[19803]: I0313 01:36:12.657718 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-service-ca\") pod \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " Mar 13 01:36:12.657873 master-0 kubenswrapper[19803]: I0313 01:36:12.657809 19803 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-serving-cert\") pod \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\" (UID: \"43107d0a-efa1-46b4-b0ae-8029f21b46ad\") " Mar 13 01:36:12.658540 master-0 kubenswrapper[19803]: I0313 01:36:12.658413 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43107d0a-efa1-46b4-b0ae-8029f21b46ad" (UID: "43107d0a-efa1-46b4-b0ae-8029f21b46ad"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:36:12.658776 master-0 kubenswrapper[19803]: I0313 01:36:12.658697 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43107d0a-efa1-46b4-b0ae-8029f21b46ad" (UID: "43107d0a-efa1-46b4-b0ae-8029f21b46ad"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:36:12.658859 master-0 kubenswrapper[19803]: I0313 01:36:12.658722 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-service-ca" (OuterVolumeSpecName: "service-ca") pod "43107d0a-efa1-46b4-b0ae-8029f21b46ad" (UID: "43107d0a-efa1-46b4-b0ae-8029f21b46ad"). 
InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:36:12.658859 master-0 kubenswrapper[19803]: I0313 01:36:12.658806 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-config" (OuterVolumeSpecName: "console-config") pod "43107d0a-efa1-46b4-b0ae-8029f21b46ad" (UID: "43107d0a-efa1-46b4-b0ae-8029f21b46ad"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Mar 13 01:36:12.661246 master-0 kubenswrapper[19803]: I0313 01:36:12.661188 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43107d0a-efa1-46b4-b0ae-8029f21b46ad" (UID: "43107d0a-efa1-46b4-b0ae-8029f21b46ad"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:36:12.662448 master-0 kubenswrapper[19803]: I0313 01:36:12.662391 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43107d0a-efa1-46b4-b0ae-8029f21b46ad-kube-api-access-l86vt" (OuterVolumeSpecName: "kube-api-access-l86vt") pod "43107d0a-efa1-46b4-b0ae-8029f21b46ad" (UID: "43107d0a-efa1-46b4-b0ae-8029f21b46ad"). InnerVolumeSpecName "kube-api-access-l86vt". PluginName "kubernetes.io/projected", VolumeGidValue "" Mar 13 01:36:12.662585 master-0 kubenswrapper[19803]: I0313 01:36:12.662493 19803 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43107d0a-efa1-46b4-b0ae-8029f21b46ad" (UID: "43107d0a-efa1-46b4-b0ae-8029f21b46ad"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Mar 13 01:36:12.760835 master-0 kubenswrapper[19803]: I0313 01:36:12.760682 19803 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l86vt\" (UniqueName: \"kubernetes.io/projected/43107d0a-efa1-46b4-b0ae-8029f21b46ad-kube-api-access-l86vt\") on node \"master-0\" DevicePath \"\"" Mar 13 01:36:12.760835 master-0 kubenswrapper[19803]: I0313 01:36:12.760745 19803 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-oauth-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:36:12.760835 master-0 kubenswrapper[19803]: I0313 01:36:12.760759 19803 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-trusted-ca-bundle\") on node \"master-0\" DevicePath \"\"" Mar 13 01:36:12.760835 master-0 kubenswrapper[19803]: I0313 01:36:12.760770 19803 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-oauth-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:36:12.760835 master-0 kubenswrapper[19803]: I0313 01:36:12.760783 19803 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-config\") on node \"master-0\" DevicePath \"\"" Mar 13 01:36:12.760835 master-0 kubenswrapper[19803]: I0313 01:36:12.760794 19803 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43107d0a-efa1-46b4-b0ae-8029f21b46ad-service-ca\") on node \"master-0\" DevicePath \"\"" Mar 13 01:36:12.760835 master-0 kubenswrapper[19803]: I0313 01:36:12.760805 19803 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/43107d0a-efa1-46b4-b0ae-8029f21b46ad-console-serving-cert\") on node \"master-0\" DevicePath \"\"" Mar 13 01:36:13.159645 master-0 kubenswrapper[19803]: I0313 01:36:13.159312 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-59c5b4f6c8-xvqg6_43107d0a-efa1-46b4-b0ae-8029f21b46ad/console/0.log" Mar 13 01:36:13.160290 master-0 kubenswrapper[19803]: I0313 01:36:13.159734 19803 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-59c5b4f6c8-xvqg6" Mar 13 01:36:13.160290 master-0 kubenswrapper[19803]: I0313 01:36:13.159844 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-59c5b4f6c8-xvqg6" event={"ID":"43107d0a-efa1-46b4-b0ae-8029f21b46ad","Type":"ContainerDied","Data":"0c6752354f3eae6ff880a852d4ed1cbe8033adb351dd1f3fff35520473017989"} Mar 13 01:36:13.168067 master-0 kubenswrapper[19803]: I0313 01:36:13.168007 19803 scope.go:117] "RemoveContainer" containerID="625c63aaa079ea37a3add2c597ae342fdf4ce128aac041cabf25a70180fc9340" Mar 13 01:36:13.220941 master-0 kubenswrapper[19803]: I0313 01:36:13.220864 19803 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-59c5b4f6c8-xvqg6"] Mar 13 01:36:13.236532 master-0 kubenswrapper[19803]: I0313 01:36:13.236447 19803 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-59c5b4f6c8-xvqg6"] Mar 13 01:36:13.260791 master-0 kubenswrapper[19803]: I0313 01:36:13.260708 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-8tblf"] Mar 13 01:36:13.261206 master-0 kubenswrapper[19803]: E0313 01:36:13.261182 19803 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="43107d0a-efa1-46b4-b0ae-8029f21b46ad" containerName="console" Mar 13 01:36:13.261263 master-0 kubenswrapper[19803]: I0313 01:36:13.261213 19803 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="43107d0a-efa1-46b4-b0ae-8029f21b46ad" containerName="console" Mar 13 01:36:13.261448 master-0 kubenswrapper[19803]: I0313 01:36:13.261379 19803 memory_manager.go:354] "RemoveStaleState removing state" podUID="43107d0a-efa1-46b4-b0ae-8029f21b46ad" containerName="console" Mar 13 01:36:13.262787 master-0 kubenswrapper[19803]: I0313 01:36:13.262088 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8tblf" Mar 13 01:36:13.264489 master-0 kubenswrapper[19803]: I0313 01:36:13.264265 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Mar 13 01:36:13.266596 master-0 kubenswrapper[19803]: I0313 01:36:13.265613 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Mar 13 01:36:13.272856 master-0 kubenswrapper[19803]: I0313 01:36:13.272804 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8tblf"] Mar 13 01:36:13.372061 master-0 kubenswrapper[19803]: I0313 01:36:13.371994 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5h6p\" (UniqueName: \"kubernetes.io/projected/f63f49fb-5b13-4480-9278-9aca536f856c-kube-api-access-n5h6p\") pod \"openstack-operator-index-8tblf\" (UID: \"f63f49fb-5b13-4480-9278-9aca536f856c\") " pod="openstack-operators/openstack-operator-index-8tblf" Mar 13 01:36:13.477618 master-0 kubenswrapper[19803]: I0313 01:36:13.474004 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5h6p\" (UniqueName: \"kubernetes.io/projected/f63f49fb-5b13-4480-9278-9aca536f856c-kube-api-access-n5h6p\") pod \"openstack-operator-index-8tblf\" (UID: \"f63f49fb-5b13-4480-9278-9aca536f856c\") " pod="openstack-operators/openstack-operator-index-8tblf" Mar 13 01:36:13.495305 master-0 kubenswrapper[19803]: 
I0313 01:36:13.495255 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5h6p\" (UniqueName: \"kubernetes.io/projected/f63f49fb-5b13-4480-9278-9aca536f856c-kube-api-access-n5h6p\") pod \"openstack-operator-index-8tblf\" (UID: \"f63f49fb-5b13-4480-9278-9aca536f856c\") " pod="openstack-operators/openstack-operator-index-8tblf" Mar 13 01:36:13.587498 master-0 kubenswrapper[19803]: I0313 01:36:13.587403 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-8tblf" Mar 13 01:36:14.081949 master-0 kubenswrapper[19803]: I0313 01:36:14.081893 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-8tblf"] Mar 13 01:36:14.086476 master-0 kubenswrapper[19803]: W0313 01:36:14.086425 19803 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf63f49fb_5b13_4480_9278_9aca536f856c.slice/crio-9ca75e851d3926a151c30b3a985bfd6153f841c798b5ed27c243ab2a8fa8e0ff WatchSource:0}: Error finding container 9ca75e851d3926a151c30b3a985bfd6153f841c798b5ed27c243ab2a8fa8e0ff: Status 404 returned error can't find the container with id 9ca75e851d3926a151c30b3a985bfd6153f841c798b5ed27c243ab2a8fa8e0ff Mar 13 01:36:14.167987 master-0 kubenswrapper[19803]: I0313 01:36:14.167934 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8tblf" event={"ID":"f63f49fb-5b13-4480-9278-9aca536f856c","Type":"ContainerStarted","Data":"9ca75e851d3926a151c30b3a985bfd6153f841c798b5ed27c243ab2a8fa8e0ff"} Mar 13 01:36:14.340817 master-0 kubenswrapper[19803]: I0313 01:36:14.340502 19803 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43107d0a-efa1-46b4-b0ae-8029f21b46ad" path="/var/lib/kubelet/pods/43107d0a-efa1-46b4-b0ae-8029f21b46ad/volumes" Mar 13 01:36:16.210676 master-0 kubenswrapper[19803]: I0313 
01:36:16.210482 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-8tblf" event={"ID":"f63f49fb-5b13-4480-9278-9aca536f856c","Type":"ContainerStarted","Data":"810dd944ceb2a0f30b9534bd4d44a25f90ae51059dd705d9745354e3b6c73d99"} Mar 13 01:36:16.243368 master-0 kubenswrapper[19803]: I0313 01:36:16.243261 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-8tblf" podStartSLOduration=1.389489344 podStartE2EDuration="3.243238759s" podCreationTimestamp="2026-03-13 01:36:13 +0000 UTC" firstStartedPulling="2026-03-13 01:36:14.088911712 +0000 UTC m=+1122.054059391" lastFinishedPulling="2026-03-13 01:36:15.942661117 +0000 UTC m=+1123.907808806" observedRunningTime="2026-03-13 01:36:16.23873201 +0000 UTC m=+1124.203879689" watchObservedRunningTime="2026-03-13 01:36:16.243238759 +0000 UTC m=+1124.208386458" Mar 13 01:36:23.588782 master-0 kubenswrapper[19803]: I0313 01:36:23.588673 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-8tblf" Mar 13 01:36:23.588782 master-0 kubenswrapper[19803]: I0313 01:36:23.588807 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-8tblf" Mar 13 01:36:23.639149 master-0 kubenswrapper[19803]: I0313 01:36:23.638713 19803 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-8tblf" Mar 13 01:36:24.363252 master-0 kubenswrapper[19803]: I0313 01:36:24.363098 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-8tblf" Mar 13 01:41:24.698553 master-0 kubenswrapper[19803]: I0313 01:41:24.696695 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wkgf5/must-gather-q7kbn"] Mar 13 01:41:24.703765 master-0 kubenswrapper[19803]: 
I0313 01:41:24.698638 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wkgf5/must-gather-q7kbn" Mar 13 01:41:24.703765 master-0 kubenswrapper[19803]: I0313 01:41:24.701713 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wkgf5"/"openshift-service-ca.crt" Mar 13 01:41:24.703765 master-0 kubenswrapper[19803]: I0313 01:41:24.701787 19803 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-wkgf5"/"kube-root-ca.crt" Mar 13 01:41:24.716877 master-0 kubenswrapper[19803]: I0313 01:41:24.715794 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wkgf5/must-gather-9wxlr"] Mar 13 01:41:24.720524 master-0 kubenswrapper[19803]: I0313 01:41:24.717639 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wkgf5/must-gather-9wxlr" Mar 13 01:41:24.734695 master-0 kubenswrapper[19803]: I0313 01:41:24.733980 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wkgf5/must-gather-q7kbn"] Mar 13 01:41:24.740785 master-0 kubenswrapper[19803]: I0313 01:41:24.740729 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wkgf5/must-gather-9wxlr"] Mar 13 01:41:24.803353 master-0 kubenswrapper[19803]: I0313 01:41:24.801330 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gzdwt\" (UniqueName: \"kubernetes.io/projected/a1ee4e1f-42e4-4bb3-b499-c76750847648-kube-api-access-gzdwt\") pod \"must-gather-9wxlr\" (UID: \"a1ee4e1f-42e4-4bb3-b499-c76750847648\") " pod="openshift-must-gather-wkgf5/must-gather-9wxlr" Mar 13 01:41:24.803353 master-0 kubenswrapper[19803]: I0313 01:41:24.801421 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: 
\"kubernetes.io/empty-dir/a523d761-19cc-436d-a966-d59247a52370-must-gather-output\") pod \"must-gather-q7kbn\" (UID: \"a523d761-19cc-436d-a966-d59247a52370\") " pod="openshift-must-gather-wkgf5/must-gather-q7kbn" Mar 13 01:41:24.803353 master-0 kubenswrapper[19803]: I0313 01:41:24.801444 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1ee4e1f-42e4-4bb3-b499-c76750847648-must-gather-output\") pod \"must-gather-9wxlr\" (UID: \"a1ee4e1f-42e4-4bb3-b499-c76750847648\") " pod="openshift-must-gather-wkgf5/must-gather-9wxlr" Mar 13 01:41:24.803353 master-0 kubenswrapper[19803]: I0313 01:41:24.801478 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-khbp8\" (UniqueName: \"kubernetes.io/projected/a523d761-19cc-436d-a966-d59247a52370-kube-api-access-khbp8\") pod \"must-gather-q7kbn\" (UID: \"a523d761-19cc-436d-a966-d59247a52370\") " pod="openshift-must-gather-wkgf5/must-gather-q7kbn" Mar 13 01:41:24.902751 master-0 kubenswrapper[19803]: I0313 01:41:24.902689 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gzdwt\" (UniqueName: \"kubernetes.io/projected/a1ee4e1f-42e4-4bb3-b499-c76750847648-kube-api-access-gzdwt\") pod \"must-gather-9wxlr\" (UID: \"a1ee4e1f-42e4-4bb3-b499-c76750847648\") " pod="openshift-must-gather-wkgf5/must-gather-9wxlr" Mar 13 01:41:24.902982 master-0 kubenswrapper[19803]: I0313 01:41:24.902791 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a523d761-19cc-436d-a966-d59247a52370-must-gather-output\") pod \"must-gather-q7kbn\" (UID: \"a523d761-19cc-436d-a966-d59247a52370\") " pod="openshift-must-gather-wkgf5/must-gather-q7kbn" Mar 13 01:41:24.902982 master-0 kubenswrapper[19803]: I0313 01:41:24.902823 19803 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1ee4e1f-42e4-4bb3-b499-c76750847648-must-gather-output\") pod \"must-gather-9wxlr\" (UID: \"a1ee4e1f-42e4-4bb3-b499-c76750847648\") " pod="openshift-must-gather-wkgf5/must-gather-9wxlr" Mar 13 01:41:24.902982 master-0 kubenswrapper[19803]: I0313 01:41:24.902856 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-khbp8\" (UniqueName: \"kubernetes.io/projected/a523d761-19cc-436d-a966-d59247a52370-kube-api-access-khbp8\") pod \"must-gather-q7kbn\" (UID: \"a523d761-19cc-436d-a966-d59247a52370\") " pod="openshift-must-gather-wkgf5/must-gather-q7kbn" Mar 13 01:41:24.904016 master-0 kubenswrapper[19803]: I0313 01:41:24.903978 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a523d761-19cc-436d-a966-d59247a52370-must-gather-output\") pod \"must-gather-q7kbn\" (UID: \"a523d761-19cc-436d-a966-d59247a52370\") " pod="openshift-must-gather-wkgf5/must-gather-q7kbn" Mar 13 01:41:24.904257 master-0 kubenswrapper[19803]: I0313 01:41:24.904213 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a1ee4e1f-42e4-4bb3-b499-c76750847648-must-gather-output\") pod \"must-gather-9wxlr\" (UID: \"a1ee4e1f-42e4-4bb3-b499-c76750847648\") " pod="openshift-must-gather-wkgf5/must-gather-9wxlr" Mar 13 01:41:24.932533 master-0 kubenswrapper[19803]: I0313 01:41:24.926287 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-khbp8\" (UniqueName: \"kubernetes.io/projected/a523d761-19cc-436d-a966-d59247a52370-kube-api-access-khbp8\") pod \"must-gather-q7kbn\" (UID: \"a523d761-19cc-436d-a966-d59247a52370\") " pod="openshift-must-gather-wkgf5/must-gather-q7kbn" Mar 13 01:41:24.932533 master-0 kubenswrapper[19803]: I0313 01:41:24.928692 19803 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gzdwt\" (UniqueName: \"kubernetes.io/projected/a1ee4e1f-42e4-4bb3-b499-c76750847648-kube-api-access-gzdwt\") pod \"must-gather-9wxlr\" (UID: \"a1ee4e1f-42e4-4bb3-b499-c76750847648\") " pod="openshift-must-gather-wkgf5/must-gather-9wxlr" Mar 13 01:41:25.054051 master-0 kubenswrapper[19803]: I0313 01:41:25.053946 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wkgf5/must-gather-q7kbn" Mar 13 01:41:25.074169 master-0 kubenswrapper[19803]: I0313 01:41:25.074042 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wkgf5/must-gather-9wxlr" Mar 13 01:41:25.531911 master-0 kubenswrapper[19803]: I0313 01:41:25.531698 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wkgf5/must-gather-9wxlr"] Mar 13 01:41:25.541774 master-0 kubenswrapper[19803]: I0313 01:41:25.539625 19803 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Mar 13 01:41:25.618616 master-0 kubenswrapper[19803]: I0313 01:41:25.618539 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wkgf5/must-gather-q7kbn"] Mar 13 01:41:25.917186 master-0 kubenswrapper[19803]: I0313 01:41:25.916970 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkgf5/must-gather-q7kbn" event={"ID":"a523d761-19cc-436d-a966-d59247a52370","Type":"ContainerStarted","Data":"4704085c24973329ab969c18699b9d1f63a70f6bbaa5d42dfb725a17ba644ae0"} Mar 13 01:41:25.919242 master-0 kubenswrapper[19803]: I0313 01:41:25.919150 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkgf5/must-gather-9wxlr" event={"ID":"a1ee4e1f-42e4-4bb3-b499-c76750847648","Type":"ContainerStarted","Data":"cf24be1df56f7764d7c6fa584a16a345fb17744b5410d3cfbed070620bdbadaa"} Mar 13 01:41:27.978650 master-0 
kubenswrapper[19803]: I0313 01:41:27.976656 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkgf5/must-gather-q7kbn" event={"ID":"a523d761-19cc-436d-a966-d59247a52370","Type":"ContainerStarted","Data":"048d42a0110a734408b7dc82bfcd1d5bee6ad24d7d0054b1ee2afe3e9eae160a"} Mar 13 01:41:27.978650 master-0 kubenswrapper[19803]: I0313 01:41:27.976729 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkgf5/must-gather-q7kbn" event={"ID":"a523d761-19cc-436d-a966-d59247a52370","Type":"ContainerStarted","Data":"dd40ddd28e8fbc3e2ec862348337ac62a26ce1d53adace63000f6b0f314213b9"} Mar 13 01:41:29.851650 master-0 kubenswrapper[19803]: I0313 01:41:29.851453 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-jzj9v_b3bf9dde-ca5b-46b8-883c-51e88ddf52e1/cluster-version-operator/1.log" Mar 13 01:41:29.976117 master-0 kubenswrapper[19803]: I0313 01:41:29.976040 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-version_cluster-version-operator-8c9c967c7-jzj9v_b3bf9dde-ca5b-46b8-883c-51e88ddf52e1/cluster-version-operator/0.log" Mar 13 01:41:32.388628 master-0 kubenswrapper[19803]: I0313 01:41:32.388537 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wkgf5/must-gather-q7kbn" podStartSLOduration=6.852525036 podStartE2EDuration="8.388517703s" podCreationTimestamp="2026-03-13 01:41:24 +0000 UTC" firstStartedPulling="2026-03-13 01:41:25.624148496 +0000 UTC m=+1433.589296185" lastFinishedPulling="2026-03-13 01:41:27.160141183 +0000 UTC m=+1435.125288852" observedRunningTime="2026-03-13 01:41:28.110290865 +0000 UTC m=+1436.075438564" watchObservedRunningTime="2026-03-13 01:41:32.388517703 +0000 UTC m=+1440.353665392" Mar 13 01:41:33.165220 master-0 kubenswrapper[19803]: I0313 01:41:33.165029 19803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_controller-7bb4cc7c98-lzxmk_5a79b54b-b4c6-4a23-8818-6ee030e13899/controller/0.log" Mar 13 01:41:33.180735 master-0 kubenswrapper[19803]: I0313 01:41:33.180671 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-lzxmk_5a79b54b-b4c6-4a23-8818-6ee030e13899/kube-rbac-proxy/0.log" Mar 13 01:41:33.250422 master-0 kubenswrapper[19803]: I0313 01:41:33.247410 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/controller/0.log" Mar 13 01:41:33.298162 master-0 kubenswrapper[19803]: I0313 01:41:33.297130 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/frr/0.log" Mar 13 01:41:33.307759 master-0 kubenswrapper[19803]: I0313 01:41:33.307118 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/reloader/0.log" Mar 13 01:41:33.326530 master-0 kubenswrapper[19803]: I0313 01:41:33.325444 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/frr-metrics/0.log" Mar 13 01:41:33.332312 master-0 kubenswrapper[19803]: I0313 01:41:33.332265 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/kube-rbac-proxy/0.log" Mar 13 01:41:33.342538 master-0 kubenswrapper[19803]: I0313 01:41:33.342000 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/kube-rbac-proxy-frr/0.log" Mar 13 01:41:33.355574 master-0 kubenswrapper[19803]: I0313 01:41:33.353810 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/cp-frr-files/0.log" Mar 13 01:41:33.363583 master-0 kubenswrapper[19803]: 
I0313 01:41:33.362186 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/cp-reloader/0.log" Mar 13 01:41:33.372596 master-0 kubenswrapper[19803]: I0313 01:41:33.371966 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/cp-metrics/0.log" Mar 13 01:41:33.384538 master-0 kubenswrapper[19803]: I0313 01:41:33.380789 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-gz7lk_26566375-fda5-4fbb-8e37-4901c404589e/frr-k8s-webhook-server/0.log" Mar 13 01:41:33.418602 master-0 kubenswrapper[19803]: I0313 01:41:33.417818 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6984bbdf9-qw42j_974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6/manager/0.log" Mar 13 01:41:33.435562 master-0 kubenswrapper[19803]: I0313 01:41:33.432959 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c89d777d4-h7xf2_36c14515-2f07-46ad-a5cd-1e81ccb8506e/webhook-server/0.log" Mar 13 01:41:33.566866 master-0 kubenswrapper[19803]: I0313 01:41:33.566820 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7p9lv_a06d84dd-5485-4043-bd8d-332d3bb99fa3/speaker/0.log" Mar 13 01:41:33.581557 master-0 kubenswrapper[19803]: I0313 01:41:33.581482 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7p9lv_a06d84dd-5485-4043-bd8d-332d3bb99fa3/kube-rbac-proxy/0.log" Mar 13 01:41:33.624529 master-0 kubenswrapper[19803]: I0313 01:41:33.624450 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-xjq7n_72706d51-8596-4a52-88bd-d994a8baad33/nmstate-console-plugin/0.log" Mar 13 01:41:33.648741 master-0 kubenswrapper[19803]: I0313 01:41:33.648705 19803 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xc24l_1da4232b-d161-4e9d-9e52-0c4663080dfd/nmstate-handler/0.log" Mar 13 01:41:33.669735 master-0 kubenswrapper[19803]: I0313 01:41:33.669579 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-tvqv9_1f9d5bff-035e-4b19-946a-c8c49fd43ebb/nmstate-metrics/0.log" Mar 13 01:41:33.677182 master-0 kubenswrapper[19803]: I0313 01:41:33.677133 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-tvqv9_1f9d5bff-035e-4b19-946a-c8c49fd43ebb/kube-rbac-proxy/0.log" Mar 13 01:41:33.696833 master-0 kubenswrapper[19803]: I0313 01:41:33.696793 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-rjb7j_c7c96cc6-98a5-467b-aed4-c50790caa51e/nmstate-operator/0.log" Mar 13 01:41:33.716341 master-0 kubenswrapper[19803]: I0313 01:41:33.716259 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-8f7rs_b64722e0-860a-4f39-bca0-51cae9911bc0/nmstate-webhook/0.log" Mar 13 01:41:34.135083 master-0 kubenswrapper[19803]: I0313 01:41:34.135048 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcdctl/0.log" Mar 13 01:41:34.241816 master-0 kubenswrapper[19803]: I0313 01:41:34.241777 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd/0.log" Mar 13 01:41:34.261098 master-0 kubenswrapper[19803]: I0313 01:41:34.261029 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-metrics/0.log" Mar 13 01:41:34.286152 master-0 kubenswrapper[19803]: I0313 01:41:34.286095 19803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-readyz/0.log" Mar 13 01:41:34.318530 master-0 kubenswrapper[19803]: I0313 01:41:34.317285 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-rev/0.log" Mar 13 01:41:34.349103 master-0 kubenswrapper[19803]: I0313 01:41:34.347727 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/setup/0.log" Mar 13 01:41:34.368047 master-0 kubenswrapper[19803]: I0313 01:41:34.367319 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-ensure-env-vars/0.log" Mar 13 01:41:34.392592 master-0 kubenswrapper[19803]: I0313 01:41:34.392114 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-resources-copy/0.log" Mar 13 01:41:34.477253 master-0 kubenswrapper[19803]: I0313 01:41:34.475390 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_dfb4407e-71fc-4684-aded-cc84f7e306dc/installer/0.log" Mar 13 01:41:34.547204 master-0 kubenswrapper[19803]: I0313 01:41:34.547148 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_dd3a989f-6c19-4f5d-b14f-369ed9941051/installer/0.log" Mar 13 01:41:35.390450 master-0 kubenswrapper[19803]: I0313 01:41:35.388689 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/assisted-installer_assisted-installer-controller-qztx6_19460daa-7d22-4d32-899c-274b86c56a13/assisted-installer-controller/0.log" Mar 13 01:41:35.842040 master-0 kubenswrapper[19803]: I0313 01:41:35.839615 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575f7bbb59-ntckb_364e6da6-2cb4-48aa-b2b9-e4ed87bc90bf/oauth-openshift/0.log" Mar 13 
01:41:36.669018 master-0 kubenswrapper[19803]: I0313 01:41:36.667153 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-plhx7_b5757329-8692-4719-b3c7-b5df78110fcf/authentication-operator/2.log" Mar 13 01:41:36.699106 master-0 kubenswrapper[19803]: I0313 01:41:36.699037 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication-operator_authentication-operator-7c6989d6c4-plhx7_b5757329-8692-4719-b3c7-b5df78110fcf/authentication-operator/3.log" Mar 13 01:41:37.453097 master-0 kubenswrapper[19803]: I0313 01:41:37.453050 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress_router-default-79f8cd6fdd-kzq6q_0caabde8-d49a-431d-afe5-8b283188c11c/router/0.log" Mar 13 01:41:38.107905 master-0 kubenswrapper[19803]: I0313 01:41:38.107817 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkgf5/must-gather-9wxlr" event={"ID":"a1ee4e1f-42e4-4bb3-b499-c76750847648","Type":"ContainerStarted","Data":"a11a16b0f5a73166215b5741aa0e9b9f6cebbc330d6e2ff42164bfb6be28ff48"} Mar 13 01:41:38.108248 master-0 kubenswrapper[19803]: I0313 01:41:38.107916 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkgf5/must-gather-9wxlr" event={"ID":"a1ee4e1f-42e4-4bb3-b499-c76750847648","Type":"ContainerStarted","Data":"8473e70974e30db31ef67f828cba2f39b0541dd7e428f66dcfacde39ff8eb9cc"} Mar 13 01:41:38.137092 master-0 kubenswrapper[19803]: I0313 01:41:38.136999 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wkgf5/must-gather-9wxlr" podStartSLOduration=2.454984776 podStartE2EDuration="14.136978947s" podCreationTimestamp="2026-03-13 01:41:24 +0000 UTC" firstStartedPulling="2026-03-13 01:41:25.539535679 +0000 UTC m=+1433.504683358" lastFinishedPulling="2026-03-13 01:41:37.22152985 +0000 UTC m=+1445.186677529" observedRunningTime="2026-03-13 
01:41:38.129988889 +0000 UTC m=+1446.095136568" watchObservedRunningTime="2026-03-13 01:41:38.136978947 +0000 UTC m=+1446.102126636" Mar 13 01:41:38.206141 master-0 kubenswrapper[19803]: I0313 01:41:38.206066 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-c84d45cdc-rj5st_536a2de1-e13c-47d1-b61d-88e0a5fd2851/oauth-apiserver/0.log" Mar 13 01:41:38.219750 master-0 kubenswrapper[19803]: I0313 01:41:38.219695 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-oauth-apiserver_apiserver-c84d45cdc-rj5st_536a2de1-e13c-47d1-b61d-88e0a5fd2851/fix-audit-permissions/0.log" Mar 13 01:41:38.802222 master-0 kubenswrapper[19803]: I0313 01:41:38.802150 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9_2581e5b5-8cbb-4fa5-9888-98fb572a6232/kube-rbac-proxy/0.log" Mar 13 01:41:38.834730 master-0 kubenswrapper[19803]: I0313 01:41:38.834650 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9_2581e5b5-8cbb-4fa5-9888-98fb572a6232/cluster-autoscaler-operator/0.log" Mar 13 01:41:38.854963 master-0 kubenswrapper[19803]: I0313 01:41:38.854907 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/2.log" Mar 13 01:41:38.855928 master-0 kubenswrapper[19803]: I0313 01:41:38.855879 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/3.log" Mar 13 01:41:38.872133 master-0 kubenswrapper[19803]: I0313 01:41:38.872079 19803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/baremetal-kube-rbac-proxy/0.log" Mar 13 01:41:38.893701 master-0 kubenswrapper[19803]: I0313 01:41:38.893647 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6_56e20b21-ba17-46ae-a740-0e7bd45eae5f/control-plane-machine-set-operator/0.log" Mar 13 01:41:38.893963 master-0 kubenswrapper[19803]: I0313 01:41:38.893801 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6_56e20b21-ba17-46ae-a740-0e7bd45eae5f/control-plane-machine-set-operator/1.log" Mar 13 01:41:38.914521 master-0 kubenswrapper[19803]: I0313 01:41:38.914465 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-rpjkb_2760a216-fd4b-46d9-a4ec-2d3285ec02bd/kube-rbac-proxy/0.log" Mar 13 01:41:38.931972 master-0 kubenswrapper[19803]: I0313 01:41:38.931879 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-rpjkb_2760a216-fd4b-46d9-a4ec-2d3285ec02bd/machine-api-operator/0.log" Mar 13 01:41:38.933345 master-0 kubenswrapper[19803]: I0313 01:41:38.933275 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-rpjkb_2760a216-fd4b-46d9-a4ec-2d3285ec02bd/machine-api-operator/1.log" Mar 13 01:41:39.557930 master-0 kubenswrapper[19803]: I0313 01:41:39.557854 19803 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"] Mar 13 01:41:39.559846 master-0 kubenswrapper[19803]: I0313 01:41:39.559013 19803 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c" Mar 13 01:41:39.595537 master-0 kubenswrapper[19803]: I0313 01:41:39.580014 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"] Mar 13 01:41:39.595537 master-0 kubenswrapper[19803]: I0313 01:41:39.587971 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-podres\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c" Mar 13 01:41:39.595537 master-0 kubenswrapper[19803]: I0313 01:41:39.588078 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-proc\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c" Mar 13 01:41:39.595537 master-0 kubenswrapper[19803]: I0313 01:41:39.588116 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfs8p\" (UniqueName: \"kubernetes.io/projected/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-kube-api-access-wfs8p\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c" Mar 13 01:41:39.595537 master-0 kubenswrapper[19803]: I0313 01:41:39.588193 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-lib-modules\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " 
pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.595537 master-0 kubenswrapper[19803]: I0313 01:41:39.588239 19803 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-sys\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.689751 master-0 kubenswrapper[19803]: I0313 01:41:39.689692 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-proc\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.689751 master-0 kubenswrapper[19803]: I0313 01:41:39.689755 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wfs8p\" (UniqueName: \"kubernetes.io/projected/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-kube-api-access-wfs8p\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.689989 master-0 kubenswrapper[19803]: I0313 01:41:39.689803 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-lib-modules\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.689989 master-0 kubenswrapper[19803]: I0313 01:41:39.689835 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-sys\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.689989 master-0 kubenswrapper[19803]: I0313 01:41:39.689850 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proc\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-proc\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.689989 master-0 kubenswrapper[19803]: I0313 01:41:39.689875 19803 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-podres\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.689989 master-0 kubenswrapper[19803]: I0313 01:41:39.689946 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-sys\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.690140 master-0 kubenswrapper[19803]: I0313 01:41:39.690021 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"podres\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-podres\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.690140 master-0 kubenswrapper[19803]: I0313 01:41:39.690099 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-lib-modules\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.713206 master-0 kubenswrapper[19803]: I0313 01:41:39.713157 19803 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wfs8p\" (UniqueName: \"kubernetes.io/projected/62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083-kube-api-access-wfs8p\") pod \"perf-node-gather-daemonset-6tg7c\" (UID: \"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083\") " pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:39.899434 master-0 kubenswrapper[19803]: I0313 01:41:39.899281 19803 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:40.276166 master-0 kubenswrapper[19803]: I0313 01:41:40.276083 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/cluster-cloud-controller-manager/0.log"
Mar 13 01:41:40.276385 master-0 kubenswrapper[19803]: I0313 01:41:40.276087 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/cluster-cloud-controller-manager/1.log"
Mar 13 01:41:40.299627 master-0 kubenswrapper[19803]: I0313 01:41:40.299056 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/config-sync-controllers/0.log"
Mar 13 01:41:40.303823 master-0 kubenswrapper[19803]: I0313 01:41:40.303764 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/config-sync-controllers/1.log"
Mar 13 01:41:40.325259 master-0 kubenswrapper[19803]: I0313 01:41:40.324233 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-controller-manager-operator_cluster-cloud-controller-manager-operator-7c8df9b496-rdbn8_80eb89dc-ccfc-4360-811a-82a3ef6f7b65/kube-rbac-proxy/0.log"
Mar 13 01:41:40.484878 master-0 kubenswrapper[19803]: I0313 01:41:40.484804 19803 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"]
Mar 13 01:41:41.150383 master-0 kubenswrapper[19803]: I0313 01:41:41.150258 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c" event={"ID":"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083","Type":"ContainerStarted","Data":"8849298407e66dc200a13e27a5395952928325dd71cca33338e4180e2d691cf3"}
Mar 13 01:41:41.150383 master-0 kubenswrapper[19803]: I0313 01:41:41.150325 19803 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c" event={"ID":"62e43e2d-fe9c-43a4-ac4b-e9ec0c90b083","Type":"ContainerStarted","Data":"c1f6200944b92277ce402ef57dde893c66846241659c6e3fd538fcfbd19dc01c"}
Mar 13 01:41:41.150383 master-0 kubenswrapper[19803]: I0313 01:41:41.150356 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:41.170793 master-0 kubenswrapper[19803]: I0313 01:41:41.170705 19803 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c" podStartSLOduration=2.170688621 podStartE2EDuration="2.170688621s" podCreationTimestamp="2026-03-13 01:41:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-03-13 01:41:41.16733261 +0000 UTC m=+1449.132480299" watchObservedRunningTime="2026-03-13 01:41:41.170688621 +0000 UTC m=+1449.135836300"
Mar 13 01:41:41.698656 master-0 kubenswrapper[19803]: I0313 01:41:41.698581 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s_65dd1dc7-1b90-40f6-82c9-dee90a1fa852/kube-rbac-proxy/0.log"
Mar 13 01:41:41.732872 master-0 kubenswrapper[19803]: I0313 01:41:41.732808 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cloud-credential-operator_cloud-credential-operator-55d85b7b47-b4w7s_65dd1dc7-1b90-40f6-82c9-dee90a1fa852/cloud-credential-operator/0.log"
Mar 13 01:41:43.334287 master-0 kubenswrapper[19803]: I0313 01:41:43.334225 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/3.log"
Mar 13 01:41:43.335064 master-0 kubenswrapper[19803]: I0313 01:41:43.335003 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-config-operator/4.log"
Mar 13 01:41:43.347109 master-0 kubenswrapper[19803]: I0313 01:41:43.347056 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-config-operator_openshift-config-operator-64488f9d78-trr9r_6fd82994-f4d4-49e9-8742-07e206322e76/openshift-api/0.log"
Mar 13 01:41:43.706682 master-0 kubenswrapper[19803]: I0313 01:41:43.706550 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-lzxmk_5a79b54b-b4c6-4a23-8818-6ee030e13899/controller/0.log"
Mar 13 01:41:43.711262 master-0 kubenswrapper[19803]: I0313 01:41:43.711229 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-lzxmk_5a79b54b-b4c6-4a23-8818-6ee030e13899/kube-rbac-proxy/0.log"
Mar 13 01:41:43.732569 master-0 kubenswrapper[19803]: I0313 01:41:43.732487 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/controller/0.log"
Mar 13 01:41:43.772527 master-0 kubenswrapper[19803]: I0313 01:41:43.772459 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/frr/0.log"
Mar 13 01:41:43.784538 master-0 kubenswrapper[19803]: I0313 01:41:43.781969 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/reloader/0.log"
Mar 13 01:41:43.790523 master-0 kubenswrapper[19803]: I0313 01:41:43.790482 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/frr-metrics/0.log"
Mar 13 01:41:43.799378 master-0 kubenswrapper[19803]: I0313 01:41:43.799345 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/kube-rbac-proxy/0.log"
Mar 13 01:41:43.807668 master-0 kubenswrapper[19803]: I0313 01:41:43.807629 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/kube-rbac-proxy-frr/0.log"
Mar 13 01:41:43.815824 master-0 kubenswrapper[19803]: I0313 01:41:43.815779 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/cp-frr-files/0.log"
Mar 13 01:41:43.826020 master-0 kubenswrapper[19803]: I0313 01:41:43.825976 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/cp-reloader/0.log"
Mar 13 01:41:43.840041 master-0 kubenswrapper[19803]: I0313 01:41:43.839994 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/cp-metrics/0.log"
Mar 13 01:41:43.851203 master-0 kubenswrapper[19803]: I0313 01:41:43.851158 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-gz7lk_26566375-fda5-4fbb-8e37-4901c404589e/frr-k8s-webhook-server/0.log"
Mar 13 01:41:43.872629 master-0 kubenswrapper[19803]: I0313 01:41:43.872562 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6984bbdf9-qw42j_974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6/manager/0.log"
Mar 13 01:41:43.889633 master-0 kubenswrapper[19803]: I0313 01:41:43.889577 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c89d777d4-h7xf2_36c14515-2f07-46ad-a5cd-1e81ccb8506e/webhook-server/0.log"
Mar 13 01:41:43.940157 master-0 kubenswrapper[19803]: I0313 01:41:43.940095 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7p9lv_a06d84dd-5485-4043-bd8d-332d3bb99fa3/speaker/0.log"
Mar 13 01:41:43.995238 master-0 kubenswrapper[19803]: I0313 01:41:43.995189 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7p9lv_a06d84dd-5485-4043-bd8d-332d3bb99fa3/kube-rbac-proxy/0.log"
Mar 13 01:41:44.231427 master-0 kubenswrapper[19803]: I0313 01:41:44.231381 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console-operator_console-operator-6c7fb6b958-4cbn4_a1d1a41c-8533-4854-abea-ed42c4d7c71f/console-operator/0.log"
Mar 13 01:41:44.797962 master-0 kubenswrapper[19803]: I0313 01:41:44.797903 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-7887658d99-sfwrp_6a8a3b62-6dfb-432e-80a6-7bb0c7f47976/console/0.log"
Mar 13 01:41:44.828731 master-0 kubenswrapper[19803]: I0313 01:41:44.828680 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_downloads-84f57b9877-ffb2n_cd044580-0236-4ee8-9a26-b8513e400238/download-server/0.log"
Mar 13 01:41:45.512963 master-0 kubenswrapper[19803]: I0313 01:41:45.512903 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_cluster-storage-operator-6fbfc8dc8f-h9mwm_65ef9aae-25a5-46c6-adf3-634f8f7a29bc/cluster-storage-operator/0.log"
Mar 13 01:41:45.528628 master-0 kubenswrapper[19803]: I0313 01:41:45.528578 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/4.log"
Mar 13 01:41:45.529288 master-0 kubenswrapper[19803]: I0313 01:41:45.529227 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-7577d6f48-bj5ld_0cc21ef9-a7c9-4154-811d-3cfff8ff3e1a/snapshot-controller/5.log"
Mar 13 01:41:45.553904 master-0 kubenswrapper[19803]: I0313 01:41:45.553850 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-storage-operator_csi-snapshot-controller-operator-5685fbc7d-478l8_d163333f-fda5-4067-ad7c-6f646ae411c8/csi-snapshot-controller-operator/1.log"
Mar 13 01:41:46.248156 master-0 kubenswrapper[19803]: I0313 01:41:46.248119 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-589895fbb7-wb6qq_7d874a21-43aa-4d81-b904-853fb3da5a94/dns-operator/0.log"
Mar 13 01:41:46.268122 master-0 kubenswrapper[19803]: I0313 01:41:46.268063 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns-operator_dns-operator-589895fbb7-wb6qq_7d874a21-43aa-4d81-b904-853fb3da5a94/kube-rbac-proxy/0.log"
Mar 13 01:41:46.828930 master-0 kubenswrapper[19803]: I0313 01:41:46.828875 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-8tblf_f63f49fb-5b13-4480-9278-9aca536f856c/registry-server/0.log"
Mar 13 01:41:46.834052 master-0 kubenswrapper[19803]: I0313 01:41:46.833966 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-pfsjd_95c7493b-ad9d-490e-83f3-aa28750b2b5e/dns/0.log"
Mar 13 01:41:46.850832 master-0 kubenswrapper[19803]: I0313 01:41:46.850786 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_dns-default-pfsjd_95c7493b-ad9d-490e-83f3-aa28750b2b5e/kube-rbac-proxy/0.log"
Mar 13 01:41:46.882851 master-0 kubenswrapper[19803]: I0313 01:41:46.882741 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-dns_node-resolver-xmwg6_bd264af8-4ced-40c4-b4f6-202bab42d0cb/dns-node-resolver/0.log"
Mar 13 01:41:47.676364 master-0 kubenswrapper[19803]: I0313 01:41:47.676291 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-8r87t_77e6cd9e-b6ef-491c-a5c3-60dab81fd752/etcd-operator/4.log"
Mar 13 01:41:47.678081 master-0 kubenswrapper[19803]: I0313 01:41:47.678034 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd-operator_etcd-operator-5884b9cd56-8r87t_77e6cd9e-b6ef-491c-a5c3-60dab81fd752/etcd-operator/3.log"
Mar 13 01:41:48.306182 master-0 kubenswrapper[19803]: I0313 01:41:48.306111 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcdctl/0.log"
Mar 13 01:41:48.410909 master-0 kubenswrapper[19803]: I0313 01:41:48.410826 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd/0.log"
Mar 13 01:41:48.429777 master-0 kubenswrapper[19803]: I0313 01:41:48.429713 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-metrics/0.log"
Mar 13 01:41:48.443181 master-0 kubenswrapper[19803]: I0313 01:41:48.443093 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-readyz/0.log"
Mar 13 01:41:48.463491 master-0 kubenswrapper[19803]: I0313 01:41:48.463389 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-rev/0.log"
Mar 13 01:41:48.481618 master-0 kubenswrapper[19803]: I0313 01:41:48.481554 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/setup/0.log"
Mar 13 01:41:48.496063 master-0 kubenswrapper[19803]: I0313 01:41:48.496006 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-ensure-env-vars/0.log"
Mar 13 01:41:48.518365 master-0 kubenswrapper[19803]: I0313 01:41:48.518269 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_etcd-master-0_29c709c82970b529e7b9b895aa92ef05/etcd-resources-copy/0.log"
Mar 13 01:41:48.584777 master-0 kubenswrapper[19803]: I0313 01:41:48.584664 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-1-master-0_dfb4407e-71fc-4684-aded-cc84f7e306dc/installer/0.log"
Mar 13 01:41:48.631199 master-0 kubenswrapper[19803]: I0313 01:41:48.631123 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-etcd_installer-2-master-0_dd3a989f-6c19-4f5d-b14f-369ed9941051/installer/0.log"
Mar 13 01:41:49.393692 master-0 kubenswrapper[19803]: I0313 01:41:49.393593 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-6vvzl_91fc568a-61ad-400e-a54e-21d62e51bb17/cluster-image-registry-operator/1.log"
Mar 13 01:41:49.398285 master-0 kubenswrapper[19803]: I0313 01:41:49.398240 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_cluster-image-registry-operator-86d6d77c7c-6vvzl_91fc568a-61ad-400e-a54e-21d62e51bb17/cluster-image-registry-operator/0.log"
Mar 13 01:41:49.420895 master-0 kubenswrapper[19803]: I0313 01:41:49.420830 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-image-registry_node-ca-r6jcs_0a557547-de25-4165-a4f5-370b54cd7f70/node-ca/0.log"
Mar 13 01:41:49.927346 master-0 kubenswrapper[19803]: I0313 01:41:49.927258 19803 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-must-gather-wkgf5/perf-node-gather-daemonset-6tg7c"
Mar 13 01:41:50.035691 master-0 kubenswrapper[19803]: I0313 01:41:50.035633 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-p5c8r_75a53c09-210a-4346-99b0-a632b9e0a3c9/ingress-operator/1.log"
Mar 13 01:41:50.037795 master-0 kubenswrapper[19803]: I0313 01:41:50.037775 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-p5c8r_75a53c09-210a-4346-99b0-a632b9e0a3c9/ingress-operator/0.log"
Mar 13 01:41:50.048329 master-0 kubenswrapper[19803]: I0313 01:41:50.048280 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-operator_ingress-operator-677db989d6-p5c8r_75a53c09-210a-4346-99b0-a632b9e0a3c9/kube-rbac-proxy/0.log"
Mar 13 01:41:50.730493 master-0 kubenswrapper[19803]: I0313 01:41:50.730428 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ingress-canary_ingress-canary-vp9bn_ebf60543-fd92-4826-a16e-7e1ebfd95089/serve-healthcheck-canary/0.log"
Mar 13 01:41:51.318348 master-0 kubenswrapper[19803]: I0313 01:41:51.318280 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-insights_insights-operator-8f89dfddd-hn4jh_6e799871-735a-44e8-8193-24c5bb388928/insights-operator/0.log"
Mar 13 01:41:52.886282 master-0 kubenswrapper[19803]: I0313 01:41:52.886185 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_9f797fd7-03a8-4b62-82c2-2015dd076114/alertmanager/0.log"
Mar 13 01:41:52.904480 master-0 kubenswrapper[19803]: I0313 01:41:52.904313 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_9f797fd7-03a8-4b62-82c2-2015dd076114/config-reloader/0.log"
Mar 13 01:41:52.924416 master-0 kubenswrapper[19803]: I0313 01:41:52.924382 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_9f797fd7-03a8-4b62-82c2-2015dd076114/kube-rbac-proxy-web/0.log"
Mar 13 01:41:52.941213 master-0 kubenswrapper[19803]: I0313 01:41:52.941172 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_9f797fd7-03a8-4b62-82c2-2015dd076114/kube-rbac-proxy/0.log"
Mar 13 01:41:52.961721 master-0 kubenswrapper[19803]: I0313 01:41:52.961628 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_9f797fd7-03a8-4b62-82c2-2015dd076114/kube-rbac-proxy-metric/0.log"
Mar 13 01:41:52.985357 master-0 kubenswrapper[19803]: I0313 01:41:52.985262 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_9f797fd7-03a8-4b62-82c2-2015dd076114/prom-label-proxy/0.log"
Mar 13 01:41:53.018015 master-0 kubenswrapper[19803]: I0313 01:41:53.017941 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_alertmanager-main-0_9f797fd7-03a8-4b62-82c2-2015dd076114/init-config-reloader/0.log"
Mar 13 01:41:53.075969 master-0 kubenswrapper[19803]: I0313 01:41:53.075891 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_cluster-monitoring-operator-674cbfbd9d-75jj7_46015913-c499-49b1-a9f6-a61c6e96b13f/cluster-monitoring-operator/0.log"
Mar 13 01:41:53.098914 master-0 kubenswrapper[19803]: I0313 01:41:53.098838 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-6w4pf_1ef69514-736d-44ba-a5e9-703bd06d52a8/kube-state-metrics/0.log"
Mar 13 01:41:53.120495 master-0 kubenswrapper[19803]: I0313 01:41:53.119969 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-6w4pf_1ef69514-736d-44ba-a5e9-703bd06d52a8/kube-rbac-proxy-main/0.log"
Mar 13 01:41:53.141301 master-0 kubenswrapper[19803]: I0313 01:41:53.141150 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_kube-state-metrics-68b88f8cb5-6w4pf_1ef69514-736d-44ba-a5e9-703bd06d52a8/kube-rbac-proxy-self/0.log"
Mar 13 01:41:53.168139 master-0 kubenswrapper[19803]: I0313 01:41:53.168086 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_metrics-server-8d4f75c74-k5jnm_5f8427fc-c594-4f19-9ef4-af196da1166e/metrics-server/0.log"
Mar 13 01:41:53.192326 master-0 kubenswrapper[19803]: I0313 01:41:53.192181 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_monitoring-plugin-6d885bb797-8nsd8_69519a11-aa5e-40e5-a655-992d32ef8150/monitoring-plugin/0.log"
Mar 13 01:41:53.224006 master-0 kubenswrapper[19803]: I0313 01:41:53.223952 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-85xmz_d948e0c5-a593-4fe0-bc58-8f157cd5ae1b/node-exporter/0.log"
Mar 13 01:41:53.239152 master-0 kubenswrapper[19803]: I0313 01:41:53.239085 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-85xmz_d948e0c5-a593-4fe0-bc58-8f157cd5ae1b/kube-rbac-proxy/0.log"
Mar 13 01:41:53.255751 master-0 kubenswrapper[19803]: I0313 01:41:53.255690 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_node-exporter-85xmz_d948e0c5-a593-4fe0-bc58-8f157cd5ae1b/init-textfile/0.log"
Mar 13 01:41:53.282705 master-0 kubenswrapper[19803]: I0313 01:41:53.282642 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-fpgj5_5e147d06-d872-4691-95f8-b9d8b7584780/kube-rbac-proxy-main/0.log"
Mar 13 01:41:53.299598 master-0 kubenswrapper[19803]: I0313 01:41:53.299547 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-fpgj5_5e147d06-d872-4691-95f8-b9d8b7584780/kube-rbac-proxy-self/0.log"
Mar 13 01:41:53.324731 master-0 kubenswrapper[19803]: I0313 01:41:53.324622 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_openshift-state-metrics-74cc79fd76-fpgj5_5e147d06-d872-4691-95f8-b9d8b7584780/openshift-state-metrics/0.log"
Mar 13 01:41:53.375761 master-0 kubenswrapper[19803]: I0313 01:41:53.375704 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_83d4214e-5ca9-401d-bd0c-860f02034a10/prometheus/0.log"
Mar 13 01:41:53.391110 master-0 kubenswrapper[19803]: I0313 01:41:53.391040 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_83d4214e-5ca9-401d-bd0c-860f02034a10/config-reloader/0.log"
Mar 13 01:41:53.412838 master-0 kubenswrapper[19803]: I0313 01:41:53.412405 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_83d4214e-5ca9-401d-bd0c-860f02034a10/thanos-sidecar/0.log"
Mar 13 01:41:53.432673 master-0 kubenswrapper[19803]: I0313 01:41:53.432630 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_83d4214e-5ca9-401d-bd0c-860f02034a10/kube-rbac-proxy-web/0.log"
Mar 13 01:41:53.447375 master-0 kubenswrapper[19803]: I0313 01:41:53.447333 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_83d4214e-5ca9-401d-bd0c-860f02034a10/kube-rbac-proxy/0.log"
Mar 13 01:41:53.459655 master-0 kubenswrapper[19803]: I0313 01:41:53.459599 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_83d4214e-5ca9-401d-bd0c-860f02034a10/kube-rbac-proxy-thanos/0.log"
Mar 13 01:41:53.479105 master-0 kubenswrapper[19803]: I0313 01:41:53.479040 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-k8s-0_83d4214e-5ca9-401d-bd0c-860f02034a10/init-config-reloader/0.log"
Mar 13 01:41:53.504545 master-0 kubenswrapper[19803]: I0313 01:41:53.504447 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5ff8674d55-tqxdr_6b5aa4fd-67eb-4d3b-a06e-90afa825eb41/prometheus-operator/0.log"
Mar 13 01:41:53.518118 master-0 kubenswrapper[19803]: I0313 01:41:53.518016 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-5ff8674d55-tqxdr_6b5aa4fd-67eb-4d3b-a06e-90afa825eb41/kube-rbac-proxy/0.log"
Mar 13 01:41:53.536280 master-0 kubenswrapper[19803]: I0313 01:41:53.536225 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_prometheus-operator-admission-webhook-8464df8497-rhk4l_0ff72b58-aca9-46f1-86ca-da8339734ac9/prometheus-operator-admission-webhook/0.log"
Mar 13 01:41:53.569778 master-0 kubenswrapper[19803]: I0313 01:41:53.569688 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7d955bd7d-xxddg_da8d30f5-9351-4865-9a0c-a5aae2118684/telemeter-client/0.log"
Mar 13 01:41:53.593446 master-0 kubenswrapper[19803]: I0313 01:41:53.593337 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7d955bd7d-xxddg_da8d30f5-9351-4865-9a0c-a5aae2118684/reload/0.log"
Mar 13 01:41:53.619628 master-0 kubenswrapper[19803]: I0313 01:41:53.619286 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_telemeter-client-7d955bd7d-xxddg_da8d30f5-9351-4865-9a0c-a5aae2118684/kube-rbac-proxy/0.log"
Mar 13 01:41:53.647264 master-0 kubenswrapper[19803]: I0313 01:41:53.647218 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5dc6c54498-5n2tv_77b804a1-c0fb-42d6-bdea-b879db3eb94c/thanos-query/0.log"
Mar 13 01:41:53.657256 master-0 kubenswrapper[19803]: I0313 01:41:53.657204 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5dc6c54498-5n2tv_77b804a1-c0fb-42d6-bdea-b879db3eb94c/kube-rbac-proxy-web/0.log"
Mar 13 01:41:53.671478 master-0 kubenswrapper[19803]: I0313 01:41:53.671359 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5dc6c54498-5n2tv_77b804a1-c0fb-42d6-bdea-b879db3eb94c/kube-rbac-proxy/0.log"
Mar 13 01:41:53.706461 master-0 kubenswrapper[19803]: I0313 01:41:53.706332 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5dc6c54498-5n2tv_77b804a1-c0fb-42d6-bdea-b879db3eb94c/prom-label-proxy/0.log"
Mar 13 01:41:53.720982 master-0 kubenswrapper[19803]: I0313 01:41:53.719888 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5dc6c54498-5n2tv_77b804a1-c0fb-42d6-bdea-b879db3eb94c/kube-rbac-proxy-rules/0.log"
Mar 13 01:41:53.744865 master-0 kubenswrapper[19803]: I0313 01:41:53.744818 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-monitoring_thanos-querier-5dc6c54498-5n2tv_77b804a1-c0fb-42d6-bdea-b879db3eb94c/kube-rbac-proxy-metrics/0.log"
Mar 13 01:41:54.150341 master-0 kubenswrapper[19803]: I0313 01:41:54.150288 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9_2581e5b5-8cbb-4fa5-9888-98fb572a6232/kube-rbac-proxy/0.log"
Mar 13 01:41:54.181394 master-0 kubenswrapper[19803]: I0313 01:41:54.181305 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-autoscaler-operator-69576476f7-lrmx9_2581e5b5-8cbb-4fa5-9888-98fb572a6232/cluster-autoscaler-operator/0.log"
Mar 13 01:41:54.196341 master-0 kubenswrapper[19803]: I0313 01:41:54.196217 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/2.log"
Mar 13 01:41:54.196996 master-0 kubenswrapper[19803]: I0313 01:41:54.196975 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/cluster-baremetal-operator/3.log"
Mar 13 01:41:54.204047 master-0 kubenswrapper[19803]: I0313 01:41:54.204021 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_cluster-baremetal-operator-5cdb4c5598-5dvnt_21110b48-25fc-434a-b156-7f6bd6064bed/baremetal-kube-rbac-proxy/0.log"
Mar 13 01:41:54.214929 master-0 kubenswrapper[19803]: I0313 01:41:54.214881 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6_56e20b21-ba17-46ae-a740-0e7bd45eae5f/control-plane-machine-set-operator/1.log"
Mar 13 01:41:54.215132 master-0 kubenswrapper[19803]: I0313 01:41:54.215084 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-6686554ddc-pmrq6_56e20b21-ba17-46ae-a740-0e7bd45eae5f/control-plane-machine-set-operator/0.log"
Mar 13 01:41:54.227846 master-0 kubenswrapper[19803]: I0313 01:41:54.227807 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-rpjkb_2760a216-fd4b-46d9-a4ec-2d3285ec02bd/kube-rbac-proxy/0.log"
Mar 13 01:41:54.235546 master-0 kubenswrapper[19803]: I0313 01:41:54.235144 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-rpjkb_2760a216-fd4b-46d9-a4ec-2d3285ec02bd/machine-api-operator/0.log"
Mar 13 01:41:54.240876 master-0 kubenswrapper[19803]: I0313 01:41:54.240823 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-84bf6db4f9-rpjkb_2760a216-fd4b-46d9-a4ec-2d3285ec02bd/machine-api-operator/1.log"
Mar 13 01:41:55.373388 master-0 kubenswrapper[19803]: I0313 01:41:55.373313 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-lzxmk_5a79b54b-b4c6-4a23-8818-6ee030e13899/controller/0.log"
Mar 13 01:41:55.389181 master-0 kubenswrapper[19803]: I0313 01:41:55.389094 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-7bb4cc7c98-lzxmk_5a79b54b-b4c6-4a23-8818-6ee030e13899/kube-rbac-proxy/0.log"
Mar 13 01:41:55.408983 master-0 kubenswrapper[19803]: I0313 01:41:55.408930 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/controller/0.log"
Mar 13 01:41:55.479565 master-0 kubenswrapper[19803]: I0313 01:41:55.477227 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/frr/0.log"
Mar 13 01:41:55.499139 master-0 kubenswrapper[19803]: I0313 01:41:55.499072 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/reloader/0.log"
Mar 13 01:41:55.515400 master-0 kubenswrapper[19803]: I0313 01:41:55.515346 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/frr-metrics/0.log"
Mar 13 01:41:55.529673 master-0 kubenswrapper[19803]: I0313 01:41:55.529620 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/kube-rbac-proxy/0.log"
Mar 13 01:41:55.545965 master-0 kubenswrapper[19803]: I0313 01:41:55.545916 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/kube-rbac-proxy-frr/0.log"
Mar 13 01:41:55.564493 master-0 kubenswrapper[19803]: I0313 01:41:55.564433 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/cp-frr-files/0.log"
Mar 13 01:41:55.581922 master-0 kubenswrapper[19803]: I0313 01:41:55.581870 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/cp-reloader/0.log"
Mar 13 01:41:55.597757 master-0 kubenswrapper[19803]: I0313 01:41:55.597693 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-dc7jx_5555aed3-8836-40c7-a55a-ff3708f816e5/cp-metrics/0.log"
Mar 13 01:41:55.622034 master-0 kubenswrapper[19803]: I0313 01:41:55.621968 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-bcc4b6f68-gz7lk_26566375-fda5-4fbb-8e37-4901c404589e/frr-k8s-webhook-server/0.log"
Mar 13 01:41:55.660661 master-0 kubenswrapper[19803]: I0313 01:41:55.660492 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6984bbdf9-qw42j_974dae1a-357e-46aa-9e2c-5dd1e9d1ffd6/manager/0.log"
Mar 13 01:41:55.679419 master-0 kubenswrapper[19803]: I0313 01:41:55.679353 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-6c89d777d4-h7xf2_36c14515-2f07-46ad-a5cd-1e81ccb8506e/webhook-server/0.log"
Mar 13 01:41:55.782352 master-0 kubenswrapper[19803]: I0313 01:41:55.782293 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7p9lv_a06d84dd-5485-4043-bd8d-332d3bb99fa3/speaker/0.log"
Mar 13 01:41:55.810063 master-0 kubenswrapper[19803]: I0313 01:41:55.807350 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-7p9lv_a06d84dd-5485-4043-bd8d-332d3bb99fa3/kube-rbac-proxy/0.log"
Mar 13 01:41:57.326770 master-0 kubenswrapper[19803]: I0313 01:41:57.326683 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-wk89g_8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/cluster-node-tuning-operator/1.log"
Mar 13 01:41:57.329890 master-0 kubenswrapper[19803]: I0313 01:41:57.329852 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_cluster-node-tuning-operator-66c7586884-wk89g_8c4fe9eb-4544-44a1-9f7e-263aef1a7cc7/cluster-node-tuning-operator/0.log"
Mar 13 01:41:57.354286 master-0 kubenswrapper[19803]: I0313 01:41:57.354227 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-cluster-node-tuning-operator_tuned-p9mnd_b74de987-7962-425e-9447-24b285eb888f/tuned/0.log"
Mar 13 01:41:58.813836 master-0 kubenswrapper[19803]: I0313 01:41:58.813755 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-g8gj5_fde89b0b-7133-4b97-9e35-51c0382bd366/kube-apiserver-operator/1.log"
Mar 13 01:41:58.847343 master-0 kubenswrapper[19803]: I0313 01:41:58.847265 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver-operator_kube-apiserver-operator-68bd585b-g8gj5_fde89b0b-7133-4b97-9e35-51c0382bd366/kube-apiserver-operator/0.log"
Mar 13 01:41:59.470945 master-0 kubenswrapper[19803]: I0313 01:41:59.470814 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-1-master-0_fdcd8438-d33f-490f-a841-8944c58506f8/installer/0.log"
Mar 13 01:41:59.494410
master-0 kubenswrapper[19803]: I0313 01:41:59.494363 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_installer-3-master-0_6481abb4-a276-4bf1-b16b-271e2ce7936e/installer/0.log" Mar 13 01:41:59.629554 master-0 kubenswrapper[19803]: I0313 01:41:59.629481 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver/0.log" Mar 13 01:41:59.642057 master-0 kubenswrapper[19803]: I0313 01:41:59.642007 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-syncer/0.log" Mar 13 01:41:59.656878 master-0 kubenswrapper[19803]: I0313 01:41:59.656822 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-cert-regeneration-controller/0.log" Mar 13 01:41:59.670040 master-0 kubenswrapper[19803]: I0313 01:41:59.669971 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-insecure-readyz/0.log" Mar 13 01:41:59.688243 master-0 kubenswrapper[19803]: I0313 01:41:59.688189 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/kube-apiserver-check-endpoints/0.log" Mar 13 01:41:59.698201 master-0 kubenswrapper[19803]: I0313 01:41:59.698154 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-master-0_077dd10388b9e3e48a07382126e86621/setup/0.log" Mar 13 01:42:00.392482 master-0 kubenswrapper[19803]: I0313 01:42:00.392431 19803 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-z4qvz_81835d51-a414-440f-889b-690561e98d6a/kube-rbac-proxy/0.log" Mar 13 01:42:00.408298 master-0 kubenswrapper[19803]: I0313 01:42:00.408044 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-z4qvz_81835d51-a414-440f-889b-690561e98d6a/manager/2.log" Mar 13 01:42:00.411816 master-0 kubenswrapper[19803]: I0313 01:42:00.411789 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-catalogd_catalogd-controller-manager-7f8b8b6f4c-z4qvz_81835d51-a414-440f-889b-690561e98d6a/manager/1.log" Mar 13 01:42:00.908672 master-0 kubenswrapper[19803]: I0313 01:42:00.908615 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-56pfb_2699d1bb-8aa6-4f12-b578-93e566b6340d/cert-manager-controller/0.log" Mar 13 01:42:00.930019 master-0 kubenswrapper[19803]: I0313 01:42:00.929967 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-r9l6s_895bac03-aaa0-46e5-a41f-ba1f2b6c5793/cert-manager-cainjector/0.log" Mar 13 01:42:00.947410 master-0 kubenswrapper[19803]: I0313 01:42:00.947351 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-5rwl7_9db962b8-555d-43be-8bc0-91bd58d8a9cc/cert-manager-webhook/0.log" Mar 13 01:42:01.053241 master-0 kubenswrapper[19803]: I0313 01:42:01.053203 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-545d4d4674-56pfb_2699d1bb-8aa6-4f12-b578-93e566b6340d/cert-manager-controller/0.log" Mar 13 01:42:01.073312 master-0 kubenswrapper[19803]: I0313 01:42:01.073261 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-5545bd876-r9l6s_895bac03-aaa0-46e5-a41f-ba1f2b6c5793/cert-manager-cainjector/0.log" Mar 13 01:42:01.084135 master-0 kubenswrapper[19803]: 
I0313 01:42:01.084096 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-6888856db4-5rwl7_9db962b8-555d-43be-8bc0-91bd58d8a9cc/cert-manager-webhook/0.log" Mar 13 01:42:01.531832 master-0 kubenswrapper[19803]: I0313 01:42:01.531778 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-xjq7n_72706d51-8596-4a52-88bd-d994a8baad33/nmstate-console-plugin/0.log" Mar 13 01:42:01.549672 master-0 kubenswrapper[19803]: I0313 01:42:01.549634 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xc24l_1da4232b-d161-4e9d-9e52-0c4663080dfd/nmstate-handler/0.log" Mar 13 01:42:01.568460 master-0 kubenswrapper[19803]: I0313 01:42:01.568388 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-tvqv9_1f9d5bff-035e-4b19-946a-c8c49fd43ebb/nmstate-metrics/0.log" Mar 13 01:42:01.581294 master-0 kubenswrapper[19803]: I0313 01:42:01.581262 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-tvqv9_1f9d5bff-035e-4b19-946a-c8c49fd43ebb/kube-rbac-proxy/0.log" Mar 13 01:42:01.614931 master-0 kubenswrapper[19803]: I0313 01:42:01.614885 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-rjb7j_c7c96cc6-98a5-467b-aed4-c50790caa51e/nmstate-operator/0.log" Mar 13 01:42:01.635095 master-0 kubenswrapper[19803]: I0313 01:42:01.635038 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-8f7rs_b64722e0-860a-4f39-bca0-51cae9911bc0/nmstate-webhook/0.log" Mar 13 01:42:02.243603 master-0 kubenswrapper[19803]: I0313 01:42:02.243486 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-pqxsw_e79a1bba-9fe7-4f9f-ad48-bb3910e54bff/prometheus-operator/0.log" Mar 13 
01:42:02.262654 master-0 kubenswrapper[19803]: I0313 01:42:02.262600 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-85989999bf-4d62m_5db336c1-1122-4d84-82d3-84594c981aa8/prometheus-operator-admission-webhook/0.log" Mar 13 01:42:02.283464 master-0 kubenswrapper[19803]: I0313 01:42:02.282123 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-85989999bf-km66w_04810dc8-d0d3-4b51-961d-a994763bae58/prometheus-operator-admission-webhook/0.log" Mar 13 01:42:02.311038 master-0 kubenswrapper[19803]: I0313 01:42:02.310999 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-fcc84_c3976e1d-6751-403b-b831-967f80ef904d/operator/0.log" Mar 13 01:42:02.330084 master-0 kubenswrapper[19803]: I0313 01:42:02.329799 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-pxl6w_fd1e670f-7667-46fc-8213-340c0479c901/perses-operator/0.log" Mar 13 01:42:02.909744 master-0 kubenswrapper[19803]: I0313 01:42:02.909698 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-mjh5s_f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/kube-multus-additional-cni-plugins/0.log" Mar 13 01:42:02.922531 master-0 kubenswrapper[19803]: I0313 01:42:02.922484 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-mjh5s_f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/egress-router-binary-copy/0.log" Mar 13 01:42:02.936557 master-0 kubenswrapper[19803]: I0313 01:42:02.936498 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-mjh5s_f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/cni-plugins/0.log" Mar 13 01:42:02.948637 master-0 kubenswrapper[19803]: I0313 01:42:02.948603 19803 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-mjh5s_f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/bond-cni-plugin/0.log" Mar 13 01:42:02.959183 master-0 kubenswrapper[19803]: I0313 01:42:02.959141 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-mjh5s_f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/routeoverride-cni/0.log" Mar 13 01:42:02.970375 master-0 kubenswrapper[19803]: I0313 01:42:02.970315 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-mjh5s_f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/whereabouts-cni-bincopy/0.log" Mar 13 01:42:02.983377 master-0 kubenswrapper[19803]: I0313 01:42:02.983316 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-mjh5s_f91b91e8-6d3d-42b9-a158-b22a5a0cc7fd/whereabouts-cni/0.log" Mar 13 01:42:02.999197 master-0 kubenswrapper[19803]: I0313 01:42:02.999147 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-56bbfd46b8-fb5cr_41ab5042-7d9a-4b2d-b00b-cd5159313262/multus-admission-controller/0.log" Mar 13 01:42:03.017286 master-0 kubenswrapper[19803]: I0313 01:42:03.017222 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-56bbfd46b8-fb5cr_41ab5042-7d9a-4b2d-b00b-cd5159313262/kube-rbac-proxy/0.log" Mar 13 01:42:03.123053 master-0 kubenswrapper[19803]: I0313 01:42:03.122946 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-xk75p_de46c12a-aa3e-442e-bcc4-365d05f50103/kube-multus/0.log" Mar 13 01:42:03.155725 master-0 kubenswrapper[19803]: I0313 01:42:03.155676 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-9hwz9_9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/network-metrics-daemon/0.log" Mar 13 01:42:03.168214 master-0 kubenswrapper[19803]: 
I0313 01:42:03.168113 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-9hwz9_9d511454-e7cf-4e9b-9d99-86d7e7aeaf4d/kube-rbac-proxy/0.log" Mar 13 01:42:03.678404 master-0 kubenswrapper[19803]: I0313 01:42:03.678355 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_lvms-operator-5855d99796-5p89t_94d3f6ed-df21-4254-80b2-4d07bb71930e/manager/0.log" Mar 13 01:42:03.698078 master-0 kubenswrapper[19803]: I0313 01:42:03.698024 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-n4wbn_df402717-11fa-4f28-96a2-beecc3c5ccc4/vg-manager/1.log" Mar 13 01:42:03.699403 master-0 kubenswrapper[19803]: I0313 01:42:03.699379 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-storage_vg-manager-n4wbn_df402717-11fa-4f28-96a2-beecc3c5ccc4/vg-manager/0.log" Mar 13 01:42:04.277724 master-0 kubenswrapper[19803]: I0313 01:42:04.277672 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-2-master-0_7106c6fe-7c8d-45b9-bc5c-521db743663f/installer/0.log" Mar 13 01:42:04.298811 master-0 kubenswrapper[19803]: I0313 01:42:04.297867 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_installer-4-master-0_943a993e-2a88-4bda-832f-d03e9d2d08d8/installer/0.log" Mar 13 01:42:04.487797 master-0 kubenswrapper[19803]: I0313 01:42:04.487716 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_961b4d54fbc741f185dfae043b7eaea5/kube-controller-manager/0.log" Mar 13 01:42:04.525252 master-0 kubenswrapper[19803]: I0313 01:42:04.525185 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_961b4d54fbc741f185dfae043b7eaea5/kube-controller-manager/1.log" Mar 13 01:42:04.575901 master-0 kubenswrapper[19803]: 
I0313 01:42:04.575791 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_961b4d54fbc741f185dfae043b7eaea5/cluster-policy-controller/0.log" Mar 13 01:42:04.590475 master-0 kubenswrapper[19803]: I0313 01:42:04.590425 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_961b4d54fbc741f185dfae043b7eaea5/kube-controller-manager-cert-syncer/0.log" Mar 13 01:42:04.603777 master-0 kubenswrapper[19803]: I0313 01:42:04.603457 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-master-0_961b4d54fbc741f185dfae043b7eaea5/kube-controller-manager-recovery-controller/0.log" Mar 13 01:42:05.284153 master-0 kubenswrapper[19803]: I0313 01:42:05.284095 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-5dgb8_f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/kube-controller-manager-operator/2.log" Mar 13 01:42:05.290570 master-0 kubenswrapper[19803]: I0313 01:42:05.289781 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager-operator_kube-controller-manager-operator-86d7cdfdfb-5dgb8_f2f0667c-90d6-4a6b-b540-9bd0ab5973ea/kube-controller-manager-operator/1.log" Mar 13 01:42:06.440181 master-0 kubenswrapper[19803]: I0313 01:42:06.440053 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-4-master-0_7e4a6501-0f99-44fd-9ae8-41b1a5d1fd90/installer/0.log" Mar 13 01:42:06.454930 master-0 kubenswrapper[19803]: I0313 01:42:06.454892 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-master-0_a6d93d3d-2899-4962-a25a-712e2fb9584b/installer/0.log" Mar 13 01:42:06.474649 master-0 kubenswrapper[19803]: I0313 01:42:06.474602 19803 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_installer-5-retry-1-master-0_ad71e4d6-32df-4ac5-acd2-e402cfef4611/installer/0.log" Mar 13 01:42:06.503782 master-0 kubenswrapper[19803]: I0313 01:42:06.503735 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler/0.log" Mar 13 01:42:06.521207 master-0 kubenswrapper[19803]: I0313 01:42:06.521156 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler-cert-syncer/0.log" Mar 13 01:42:06.535066 master-0 kubenswrapper[19803]: I0313 01:42:06.535023 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/kube-scheduler-recovery-controller/0.log" Mar 13 01:42:06.549752 master-0 kubenswrapper[19803]: I0313 01:42:06.549698 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler_openshift-kube-scheduler-master-0_1453f6461bf5d599ad65a4656343ee91/wait-for-host-port/0.log" Mar 13 01:42:06.750097 master-0 kubenswrapper[19803]: I0313 01:42:06.750042 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-86f58fcf4-xjq7n_72706d51-8596-4a52-88bd-d994a8baad33/nmstate-console-plugin/0.log" Mar 13 01:42:06.764576 master-0 kubenswrapper[19803]: I0313 01:42:06.764529 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-xc24l_1da4232b-d161-4e9d-9e52-0c4663080dfd/nmstate-handler/0.log" Mar 13 01:42:06.777557 master-0 kubenswrapper[19803]: I0313 01:42:06.777493 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-tvqv9_1f9d5bff-035e-4b19-946a-c8c49fd43ebb/nmstate-metrics/0.log" Mar 13 01:42:06.787905 
master-0 kubenswrapper[19803]: I0313 01:42:06.787863 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-9b8c8685d-tvqv9_1f9d5bff-035e-4b19-946a-c8c49fd43ebb/kube-rbac-proxy/0.log" Mar 13 01:42:06.809686 master-0 kubenswrapper[19803]: I0313 01:42:06.809624 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-796d4cfff4-rjb7j_c7c96cc6-98a5-467b-aed4-c50790caa51e/nmstate-operator/0.log" Mar 13 01:42:06.821224 master-0 kubenswrapper[19803]: I0313 01:42:06.821176 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-5f558f5558-8f7rs_b64722e0-860a-4f39-bca0-51cae9911bc0/nmstate-webhook/0.log" Mar 13 01:42:07.168436 master-0 kubenswrapper[19803]: I0313 01:42:07.168327 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-8fkz8_c6db75e5-efd1-4bfa-9941-0934d7621ba2/kube-scheduler-operator-container/2.log" Mar 13 01:42:07.173988 master-0 kubenswrapper[19803]: I0313 01:42:07.173940 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-scheduler-operator_openshift-kube-scheduler-operator-5c74bfc494-8fkz8_c6db75e5-efd1-4bfa-9941-0934d7621ba2/kube-scheduler-operator-container/3.log" Mar 13 01:42:07.868665 master-0 kubenswrapper[19803]: I0313 01:42:07.868563 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-57ccdf9b5-5zsh9_f771149b-9d62-408e-be6f-72f575b1ec42/migrator/0.log" Mar 13 01:42:07.880643 master-0 kubenswrapper[19803]: I0313 01:42:07.880603 19803 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-storage-version-migrator_migrator-57ccdf9b5-5zsh9_f771149b-9d62-408e-be6f-72f575b1ec42/graceful-termination/0.log"